query_id: stringlengths 32–32
query: stringlengths 6–5.38k
positive_passages: listlengths 1–17
negative_passages: listlengths 9–100
subset: stringclasses 7 values
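The schema above describes one retrieval/reranking record per row. Below is a minimal sketch of loading and inspecting such a row with the HuggingFace datasets library; the file name "scidocs-rr.jsonl" is a placeholder assumption rather than the dataset's actual location, and the field accesses simply mirror the columns listed above.

```python
# Minimal sketch, assuming the rows are stored as JSON lines with the schema above.
# "scidocs-rr.jsonl" is a hypothetical file name used only for illustration.
from datasets import load_dataset

ds = load_dataset("json", data_files="scidocs-rr.jsonl", split="train")

row = ds[0]
print(row["query_id"])                     # 32-character identifier
print(row["query"])                        # query text (6 to ~5.38k characters)
print(len(row["positive_passages"]))       # 1-17 relevant passages
print(len(row["negative_passages"]))       # 9-100 non-relevant passages
print(row["positive_passages"][0].keys())  # each passage holds "docid", "text", "title"
print(row["subset"])                       # one of 7 subset labels, e.g. "scidocsrr"
```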
6ed6efed723bb19c458c0c8de2b74ccc
Reassembleable Disassembling
[ { "docid": "b730eb83f78fc9fb0466d9ea0e123451", "text": "Control-Flow Integrity (CFI) is a software-hardening technique. It inlines checks into a program so that its execution always follows a predetermined Control-Flow Graph (CFG). As a result, CFI is effective at preventing control-flow hijacking attacks. However, past fine-grained CFI implementations do not support separate compilation, which hinders its adoption.\n We present Modular Control-Flow Integrity (MCFI), a new CFI technique that supports separate compilation. MCFI allows modules to be independently instrumented and linked statically or dynamically. The combined module enforces a CFG that is a combination of the individual modules' CFGs. One challenge in supporting dynamic linking in multithreaded code is how to ensure a safe transition from the old CFG to the new CFG when libraries are dynamically linked. The key technique we use is to have the CFG represented in a runtime data structure and have reads and updates of the data structure wrapped in transactions to ensure thread safety. Our evaluation on SPECCPU2006 benchmarks shows that MCFI supports separate compilation, incurs low overhead of around 5%, and enhances security.", "title": "" }, { "docid": "7cb61609adf6e3c56c762d6fe322903c", "text": "In this paper, we give an overview of the BitBlaze project, a new approach to computer security via binary analysis. In particular, BitBlaze focuses on building a unified binary analysis platform and using it to provide novel solutions to a broad spectrum of different security problems. The binary analysis platform is designed to enable accurate analysis, provide an extensible architecture, and combines static and dynamic analysis as well as program verification techniques to satisfy the common needs of security applications. By extracting security-related properties from binary programs directly, BitBlaze enables a principled, root-cause based approach to computer security, offering novel and effective solutions, as demonstrated with over a dozen different security applications.", "title": "" }, { "docid": "09062173db6b5f5190ab7c8f7f6ce6fd", "text": "This paper presents component techniques essential for converting executables to a high-level intermediate representation (IR) of an existing compiler. The compiler IR is then employed for three distinct applications: binary rewriting using the compiler's binary back-end, vulnerability detection using source-level symbolic execution, and source-code recovery using the compiler's C backend. Our techniques enable complex high-level transformations not possible in existing binary systems, address a major challenge of input-derived memory addresses in symbolic execution and are the first to enable recovery of a fully functional source-code.\n We present techniques to segment the flat address space in an executable containing undifferentiated blocks of memory. We demonstrate the inadequacy of existing variable identification methods for their promotion to symbols and present our methods for symbol promotion. We also present methods to convert the physically addressed stack in an executable (with a stack pointer) to an abstract stack (without a stack pointer). Our methods do not use symbolic, relocation, or debug information since these are usually absent in deployed executables.\n We have integrated our techniques with a prototype x86 binary framework called SecondWrite that uses LLVM as IR. 
The robustness of the framework is demonstrated by handling executables totaling more than a million lines of source-code, produced by two different compilers (gcc and Microsoft Visual Studio compiler), three languages (C, C++, and Fortran), two operating systems (Windows and Linux) and a real world program (Apache server).", "title": "" } ]
[ { "docid": "1dcbd0c9fad30fcc3c0b6f7c79f5d04c", "text": "Anvil is a tool for the annotation of audiovisual material containing multimodal dialogue. Annotation takes place on freely definable, multiple layers (tracks) by inserting time-anchored elements that hold a number of typed attribute-value pairs. Higher-level elements (suprasegmental) consist of a sequence of elements. Attributes contain symbols or cross-level links to arbitrary other elements. Anvil is highly generic (usable with different annotation schemes), platform-independent, XMLbased and fitted with an intuitive graphical user interface. For project integration, Anvil offers the import of speech transcription and export of text and table data for further statistical processing.", "title": "" }, { "docid": "a2b7e5f8f0cbd596632e66c62f405e28", "text": "Convolutional neural networks (CNNs) leverage the great power in representation learning on regular grid data such as image and video. Recently, increasing attention has been paid on generalizing CNNs to graph or network data which is highly irregular. Some focus on graph-level representation learning while others aim to learn node-level representations. These methods have been shown to boost the performance of many graph-level tasks such as graph classification and node-level tasks such as node classification. Most of these methods have been designed for single-dimensional graphs where a pair of nodes can only be connected by one type of relation. However, many real-world graphs have multiple types of relations and they can be naturally modeled as multi-dimensional graphs with each type of relation as a dimension. Multi-dimensional graphs bring about richer interactions between dimensions, which poses tremendous challenges to the graph convolutional neural networks designed for single-dimensional graphs. In this paper, we study the problem of graph convolutional networks for multi-dimensional graphs and propose a multi-dimensional convolutional neural network model mGCN aiming to capture rich information in learning node-level representations for multi-dimensional graphs. Comprehensive experiments on real-world multi-dimensional graphs demonstrate the effectiveness of the proposed framework.", "title": "" }, { "docid": "01209a2ace1a4bc71ad4ff848bb8a3f4", "text": "For data storage outsourcing services, it is important to allow data owners to efficiently and securely verify that the storage server stores their data correctly. To address this issue, several proof-of-retrievability (POR) schemes have been proposed wherein a storage server must prove to a verifier that all of a client's data are stored correctly. While existing POR schemes offer decent solutions addressing various practical issues, they either have a non-trivial (linear or quadratic) communication complexity, or only support private verification, i.e., only the data owner can verify the remotely stored data. It remains open to design a POR scheme that achieves both public verifiability and constant communication cost simultaneously.\n In this paper, we solve this open problem and propose the first POR scheme with public verifiability and constant communication cost: in our proposed scheme, the message exchanged between the prover and verifier is composed of a constant number of group elements; different from existing private POR constructions, our scheme allows public verification and releases the data owners from the burden of staying online. 
We achieved these by tailoring and uniquely combining techniques such as constant size polynomial commitment and homomorphic linear authenticators. Thorough analysis shows that our proposed scheme is efficient and practical. We prove the security of our scheme based on the Computational Diffie-Hellman Problem, the Strong Diffie-Hellman assumption and the Bilinear Strong Diffie-Hellman assumption.", "title": "" }, { "docid": "4718e64540f5b8d7399852fb0e16944a", "text": "In this paper, we propose a novel extension of the extreme learning machine (ELM) algorithm for single-hidden layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria on the optimization process followed for the calculation of the network's output weights. The proposed graph embedded ELM (GEELM) algorithm is able to naturally exploit both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm in order to be able to exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces. We evaluate the proposed approach on eight standard classification problems and nine publicly available datasets designed for three problems related to human behavior analysis, i.e., the recognition of human face, facial expression, and activity. Experimental results denote the effectiveness of the proposed approach, since it outperforms other ELM-based classification schemes in all the cases.", "title": "" }, { "docid": "be009b972c794d01061c4ebdb38cc720", "text": "The existing efforts in computer assisted semen analysis have been focused on high speed imaging and automated image analysis of sperm motility. This results in a large amount of data, and it is extremely challenging for both clinical scientists and researchers to interpret, compare and correlate the multidimensional and time-varying measurements captured from video data. In this work, we use glyphs to encode a collection of numerical measurements taken at a regular interval and to summarize spatio-temporal motion characteristics using static visual representations. The design of the glyphs addresses the needs for (a) encoding some 20 variables using separable visual channels, (b) supporting scientific observation of the interrelationships between different measurements and comparison between different sperm cells and their flagella, and (c) facilitating the learning of the encoding scheme by making use of appropriate visual abstractions and metaphors. As a case study, we focus this work on video visualization for computer-aided semen analysis, which has a broad impact on both biological sciences and medical healthcare. We demonstrate that glyph-based visualization can serve as a means of external memorization of video data as well as an overview of a large set of spatiotemporal measurements. It enables domain scientists to make scientific observation in a cost-effective manner by reducing the burden of viewing videos repeatedly, while providing them with a new visual representation for conveying semen statistics.", "title": "" }, { "docid": "7a0cbd5c4d09de8646745a3e24e32459", "text": "Due to the coarse granularity of data accesses and the heavy use of latches, indices in the B-tree family are not efficient for in-memory databases, especially in the context of today’s multi-core architecture. 
In this paper, we present PI, a Parallel in-memory skip list based Index that lends itself naturally to the parallel and concurrent environment, particularly with non-uniform memory access. In PI, incoming queries are collected, and disjointly distributed among multiple threads for processing to avoid the use of latches. For each query, PI traverses the index in a Breadth-First-Search (BFS) manner to find the list node with the matching key, exploiting SIMD processing to speed up the search process. In order for query processing to be latch-free, PI employs a light-weight communication protocol that enables threads to re-distribute the query workload among themselves such that each list node that will be modified as a result of query processing will be accessed by exactly one thread. We conducted extensive experiments, and the results show that PI can be up to three times as fast as the Masstree, a state-of-the-art B-tree based index.", "title": "" }, { "docid": "fb71d22cad59ba7cf5b9806e37df3340", "text": "Templates are effective tools for increasing the precision of natural language requirements and for avoiding ambiguities that may arise from the use of unrestricted natural language. When templates are applied, it is important to verify that the requirements are indeed written according to the templates. If done manually, checking conformance to templates is laborious, presenting a particular challenge when the task has to be repeated multiple times in response to changes in the requirements. In this article, using techniques from natural language processing (NLP), we develop an automated approach for checking conformance to templates. Specifically, we present a generalizable method for casting templates into NLP pattern matchers and reflect on our practical experience implementing automated checkers for two well-known templates in the requirements engineering community. We report on the application of our approach to four case studies. Our results indicate that: (1) our approach provides a robust and accurate basis for checking conformance to templates; and (2) the effectiveness of our approach is not compromised even when the requirements glossary terms are unknown. This makes our work particularly relevant to practice, as many industrial requirements documents have incomplete glossaries.", "title": "" }, { "docid": "dccb4e0d84d0863444a3e180a12c5778", "text": "This paper describes a systems for emotion recognition and its application on the dataset from the AV+EC 2016 Emotion Recognition Challenge. The realized system was produced and submitted to the AV+EC 2016 evaluation, making use of all three modalities (audio, video, and physiological data). Our work primarily focused on features derived from audio. The original audio features were complement with bottleneck features and also text-based emotion recognition which is based on transcribing audio by an automatic speech recognition system and applying resources such as word embedding models and sentiment lexicons. Our multimodal fusion reached CCC=0.855 on dev set for arousal and 0.713 for valence. CCC on test set is 0.719 and 0.596 for arousal and valence respectively.", "title": "" }, { "docid": "03d5eadaefc71b1da1b26f4e2923a082", "text": "Sleep is characterized by a structured combination of neuronal oscillations. 
In the hippocampus, slow-wave sleep (SWS) is marked by high-frequency network oscillations (approximately 200 Hz \"ripples\"), whereas neocortical SWS activity is organized into low-frequency delta (1-4 Hz) and spindle (7-14 Hz) oscillations. While these types of hippocampal and cortical oscillations have been studied extensively in isolation, the relationships between them remain unknown. Here, we demonstrate the existence of temporal correlations between hippocampal ripples and cortical spindles that are also reflected in the correlated activity of single neurons within these brain structures. Spindle-ripple episodes may thus constitute an important mechanism of cortico-hippocampal communication during sleep. This coactivation of hippocampal and neocortical pathways may be important for the process of memory consolidation, during which memories are gradually translated from short-term hippocampal to longer-term neocortical stores.", "title": "" }, { "docid": "c0e70347999c028516eb981a15b8a6c8", "text": "Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon.", "title": "" }, { "docid": "92386d23413e6f951f76e7cdc0ee0aa3", "text": "This study covers a complete overview of the theoretical rationale of application of robots, other instructional interfaces like CALL, MALL, m-learning, r-learning, different types of robots, their instructional roles, their educational activities, the related researches, findings, and challenges of robotic assisted language learning. Since robotic revolution, many investigators in different countries have attempted to utilize robots to enhance education. As many people in the world have personal computers (PCs), in the following years, Personal Robots (PR) may become the next tool for everyone's life. Robots not only have the attributes of CALL/MALL, but also are able for independent movements, voice/visual recognition and environmental interactions, non-verbal communication, collaboration with native speakers, diagnosing pronunciation, video conferencing with native speakers, native speaker tutoring, adaptability, sensing, repeatability, intelligence, mobility and human appearance. Robot-aided learning (r-learning) services can be described as interactive and instructional activities which can be interacted and performed between robots and learners in both virtual and real worlds.", "title": "" }, { "docid": "441a81522a192467b128b59d7af4c39c", "text": "Selective ensemble is a learning paradigm that follows an “overproduce and choose” strategy, where a number of candidate classifiers are trained, and a set of several classifiers that are accurate and diverse are selected to solve a problem.
In this paper, the hybrid approach called D3C is presented; this approach is a hybrid model of ensemble pruning that is based on k-means clustering and the framework of dynamic selection and circulating in combination with a sequential search method. Additionally, a multilabel D3C is derived from D3C through employing a problem transformation for multi-label classification. Empirical study shows that D3C exhibits competitive performance against other high-performance methods, and experiments in multi-label datasets verify the feasibility of multi-label D3C. & 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b18f5df68581789312d48c65ba7afb9d", "text": "In this study, an efficient addressing scheme for radix-4 FFT processor is presented. The proposed method uses extra registers to buffer and reorder the data inputs of the butterfly unit. It avoids the modulo-r addition in the address generation; hence, the critical path is significantly shorter than the conventional radix-4 FFT implementations. A significant property of the proposed method is that the critical path of the address generator is independent from the FFT transform length N, making it extremely efficient for large FFT transforms. For performance evaluation, the new FFT architecture has been implemented by FPGA (Altera Stratix) hardware and also synthesized by CMOS 0.18µm technology. The results confirm the speed and area advantages for large FFTs. Although only radix-4 FFT address generation is presented in the paper, it can be used for higher radix FFT.", "title": "" }, { "docid": "083cb6546aecdc12c2a1e36a9b8d9b67", "text": "Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT’14 English-French and WMT’16 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semisupervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.1", "title": "" }, { "docid": "f7f84bab2b7024ceb33fdb83af3cb0e1", "text": "OBJECTIVES\nTo examine whether indicators of community- and state-level lesbian, gay, bisexual, and transgender equality are associated with transgender veterans' mental health.\n\n\nMETHODS\nWe extracted Veterans Administration data for patients who were diagnosed with gender identity disorder, had at least 1 visit in 2013, and lived in a zip code with a Municipality Equality Index score (n = 1640). 
We examined the associations of whether a state included transgender status in employment nondiscrimination laws and in hate crimes laws with mood disorders; alcohol, illicit drug, and tobacco use disorders; posttraumatic stress disorder; and suicidal ideation or attempt.\n\n\nRESULTS\nNearly half (47.3%) of the sample lived in states with employment discrimination protection, and 44.8% lived in states with hate crimes protection. Employment nondiscrimination protection was associated with 26% decreased odds of mood disorders (adjusted odds ratio [AOR] = 0.74; 95% confidence interval [CI] = 0.59, 0.93) and 43% decreased odds of self-directed violence (AOR = 0.57; 95% CI = 0.34, 0.95).\n\n\nCONCLUSIONS\nUnderstanding lesbian, gay, bisexual, and transgender social stressors can inform treatment and care coordination for transgender populations.", "title": "" }, { "docid": "5594475c91355d113e0045043eff8b93", "text": "Background: Since the introduction of the systematic review process to Software Engineering in 2004, researchers have investigated a number of ways to mitigate the amount of effort and time taken to filter through large volumes of literature.\n Aim: This study aims to provide a critical analysis of text mining techniques used to support the citation screening stage of the systematic review process.\n Method: We critically re-reviewed papers included in a previous systematic review which addressed the use of text mining methods to support the screening of papers for inclusion in a review. The previous review did not provide a detailed analysis of the text mining methods used. We focus on the availability in the papers of information about the text mining methods employed, including the description and explanation of the methods, parameter settings, assessment of the appropriateness of their application given the size and dimensionality of the data used, performance on training, testing and validation data sets, and further information that may support the reproducibility of the included studies.\n Results: Support Vector Machines (SVM), Naïve Bayes (NB) and Committee of classifiers (Ensemble) are the most used classification algorithms. In all of the studies, features were represented with Bag-of-Words (BOW) using both binary features (28%) and term frequency (66%). Five studies experimented with n-grams with n between 2 and 4, but mostly the unigram was used. χ2, information gain and tf-idf were the most commonly used feature selection techniques. Feature extraction was rarely used although LDA and topic modelling were used. Recall, precision, F and AUC were the most used metrics and cross validation was also well used. More than half of the studies used a corpus size of below 1,000 documents for their experiments while corpus size for around 80% of the studies was 3,000 or fewer documents. The major common ground we found for comparing performance assessment based on independent replication of studies was the use of the same dataset but a sound performance comparison could not be established because the studies had little else in common. In most of the studies, insufficient information was reported to enable independent replication. The studies analysed generally did not include any discussion of the statistical appropriateness of the text mining method that they applied. 
In the case of applications of SVM, none of the studies report the number of support vectors that they found to indicate the complexity of the prediction engine that they use, making it impossible to judge the extent to which over-fitting might account for the good performance results.\n Conclusions: There is yet to be concrete evidence about the effectiveness of text mining algorithms regarding their use in the automation of citation screening in systematic reviews. The studies indicate that options are still being explored, but there is a need for better reporting as well as more explicit process details and access to datasets to facilitate study replication for evidence strengthening. In general, the reader often gets the impression that text mining algorithms were applied as magic tools in the reviewed papers, relying on default settings or default optimization of available machine learning toolboxes without an in-depth understanding of the statistical validity and appropriateness of such tools for text mining purposes.", "title": "" }, { "docid": "b0e58ee4008fbf0e2555851c7889300d", "text": "Projection technology typically places several constraints on the geometric relationship between the projector and the projection surface to obtain an undistorted, properly sized image. In this paper we describe a simple, robust, fast, and low-cost method for automatic projector calibration that eliminates many of these constraints. We embed light sensors in the target surface, project Gray-coded binary patterns to discover the sensor locations, and then prewarp the image to accurately fit the physical features of the projection surface. This technique can be expanded to automatically stitch multiple projectors, calibrate onto non-planar surfaces for object decoration, and provide a method for simple geometry acquisition.", "title": "" }, { "docid": "732d6bd47a4ab7b77d1c192315a1577c", "text": "In this paper, we address the problem of classifying image sets, each of which contains images belonging to the same class but covering large variations in, for instance, viewpoint and illumination. We innovatively formulate the problem as the computation of Manifold-Manifold Distance (MMD), i.e., calculating the distance between nonlinear manifolds each representing one image set. To compute MMD, we also propose a novel manifold learning approach, which expresses a manifold by a collection of local linear models, each depicted by a subspace. MMD is then converted to integrating the distances between pair of subspaces respectively from one of the involved manifolds. The proposed MMD method is evaluated on the task of Face Recognition based on Image Set (FRIS). In FRIS, each known subject is enrolled with a set of facial images and modeled as a gallery manifold, while a testing subject is modeled as a probe manifold, which is then matched against all the gallery manifolds by MMD. Identification is achieved by seeking the minimum MMD. Experimental results on two public face databases, Honda/UCSD and CMU MoBo, demonstrate that the proposed MMD method outperforms the competing methods.", "title": "" }, { "docid": "3da06ce9757ee8361c61bf8be7e02287", "text": "We propose a real time, gesture based robotic arm manipulation using kinect sensor. This method uses kinect depth data, skeletal data and joint orientation data for end effector movement including roll, pitch and pinch. For the initial data, we extracted skeletal of the person, hand position and hand state. 
In closed hand state condition, the position of the hand is extracted and the three dimension data is used for inverse kinematics of the joints of the robotic arm. In open hand state, tilting and rolling of hand is used for pitch and roll respectively. In lasso state, pinch of end effector is perceived. The robotic arm used for the experimentation is Scorbot ER III. The system shows promising results for non-invasive robotic arm manipulation.", "title": "" }, { "docid": "b008f4477ec7bdb80bc88290a57e5883", "text": "Artificial Neural networks purport to be biomimetic, but are by definition acyclic computational graphs. As a corollary, neurons in artificial nets fire only once and have no time-dynamics. Both these properties contrast with what neuroscience has taught us about human brain connectivity, especially with regards to object recognition. We therefore propose a way to simulate feedback loops in the brain by unrolling loopy neural networks several timesteps, and investigate the properties of these networks. We compare different variants of loops, including multiplicative composition of inputs and additive composition of inputs. We demonstrate that loopy networks outperform deep feedforward networks with the same number of parameters on the CIFAR-10 dataset, as well as nonloopy versions of the same network, and perform equally well on the MNIST dataset. In order to further understand our models, we visualize neurons in loop layers with guided backprop, demonstrating that the same filters behave increasingly nonlinearly at higher unrolling levels. Furthermore, we interpret loops as attention mechanisms, and demonstrate that the composition of the loop output with the input image produces images that look qualitatively like attention maps.", "title": "" } ]
scidocsrr
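A record like the one above pairs a query with labeled positive and negative passages, which lends itself to a simple reranking check. The sketch below scores every passage of one record against its query with TF-IDF cosine similarity and counts how many positives reach the top of the ranking; the scoring model is an illustrative assumption, not an evaluation protocol defined by the dataset.

```python
# Illustrative baseline only: rank a record's passages against its query with
# TF-IDF cosine similarity and count positives in the top-k. The scoring model
# is an assumption for demonstration, not part of the dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def positives_in_top_k(record, k=10):
    passages = record["positive_passages"] + record["negative_passages"]
    labels = [1] * len(record["positive_passages"]) + [0] * len(record["negative_passages"])
    texts = [p["title"] + " " + p["text"] for p in passages]

    # Fit TF-IDF on the query plus all candidate passages of this record.
    tfidf = TfidfVectorizer(stop_words="english")
    matrix = tfidf.fit_transform([record["query"]] + texts)

    # Cosine similarity of the query (row 0) against every passage.
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    ranked = scores.argsort()[::-1]
    return sum(labels[i] for i in ranked[:k])

# Example usage (assumes `ds` was loaded as in the earlier sketch):
# print(positives_in_top_k(ds[0], k=10))
```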
d6479c88687ea4ed27b7f3acf7e29220
Motion saliency outweighs other low-level features while watching videos
[ { "docid": "1ecfa8b0ec2a4a1f3bd9356eb02c9dca", "text": "We tested the hypothesis that fixation locations during scene viewing are primarily determined by visual salience. Eye movements were collected from participants who viewed photographs of real-world scenes during an active search task. Visual salience as determined by a popular computational model did not predict region-to-region saccades or saccade sequences any better than did a random model. Consistent with other reports in the literature, intensity, contrast, and edge density differed at fixated scene regions compared to regions that were not fixated, but these fixated regions also differ in rated semantic informativeness. Therefore, any observed correlations between fixation locations and image statistics cannot be unambiguously attributed to these image statistics. We conclude that visual saliency does not account for eye movements during active search. The existing evidence is consistent with the hypothesis that cognitive factors play the dominant role in active gaze control. Elsevier AMS Ch25-I044980 Job code: EMAW 14-2-2007 1:11p.m. Page:539 Trimsize:165×240MM Basal Fonts:Times Margins:Top:4.6pc Gutter:4.6pc Font Size:10/12 Text Width:30pc Depth:43 Lines Ch. 25: Visual Saliency Does Not Account for Eye Movements During Visual Search 539 During real-world scene perception, we move our eyes about three times each second via very rapid eye movements (saccades) to reorient the high-resolving power of the fovea. Pattern information is acquired only during periods of relative gaze stability (fixations) due to a combination of central suppression and visual masking (Matin, 1974; Thiele, Henning, Buishik, & Hoffman, 2002; Volkman, 1986). Gaze control is the process of directing the eyes through a scene in real time in the service of ongoing perceptual, cognitive, and behavioral activity (Henderson, 2003; Henderson & Hollingworth, 1998, 1999). There are at least three reasons that the study of gaze control is important in real-world scene perception (Henderson, 2003; Henderson & Ferreira, 2004a). First, human vision is active, in the sense that fixation is directed toward task-relevant information as it is needed for ongoing visual and cognitive computations. Although this point seems obvious to eye movement researchers, it is often overlooked in the visual perception and visual cognition literatures. For example, much of the research on real-world scene perception has used tachistoscopic display methods in which eye movements are not possible (though see Underwood, this part; Gareze & Findlay, this part). While understanding what is initially apprehended from a scene is an important theoretical topic, it is not the whole story; vision naturally unfolds over time and multiple fixations. Any complete theory of visual cognition, therefore, requires understanding how ongoing visual and cognitive processes control the direction of the eyes in real time, and how vision and cognition are affected by where the eyes are pointed at any given moment in time. Second, eye movements provide a window into the operation of selective attention. Indeed, although internal (covert) attention and overt eye movements can be dissociated (Posner & Cohen, 1984), the strong natural relationship between covert and overt attention has recently led some investigators to suggest that studying covert visual attention independently of overt attention is misguided (Findlay, 2004; Findlay & Gilchrist, 2003). 
For example, as Findlay and Gilchrist (2003) have noted, much of the research in the visual search literature has proceeded as though viewers steadfastly maintain fixation during search, allocating attention only via an internal mechanism. However, visual search is virtually always accompanied by saccadic eye movements (e.g., see the chapters by Hooge, Vlaskamp, & Over, this part; Shen & Reingold, this part). In fact, studies of visual search that employ eye tracking often result in different conclusions than do studies that assume the eyes remain still. As a case in point, eye movement records reveal a much richer role for memory in the selection of information for viewing (e.g. McCarley, Wang, Kramer, Irwin, & Peterson, 2003; Peterson, Kramer, Wang, Irwin, & McCarley, 2001) than research that uses more traditional measures such as reaction time (e.g. Horowitz & Wolfe, 1998). To obtain a complete understanding of the role of memory and attention in visual cognition, it is necessary to understand eye movements. Third, because gaze is typically directed at the current focus of analysis (see Irwin, 2004, for some caveats), eye movements provide an unobtrusive, sensitive, real-time behavioral index of ongoing visual and cognitive processing. This fact has led to enormous insights into perceptual and linguistic processing in reading (Liversedge & Findlay, 2000; Rayner, 1998; Sereno & Rayner, 2003), but eye movements are only now becoming a similarly important tool in the study of visual cognition generally and scene perception in particular. 1. Fixation placement during scene viewing A fundamental goal in the study of gaze control during scene viewing is to understand the factors that determine where fixation will be placed. Two general hypotheses have been advanced to explain fixation locations in scenes. According to what we will call the visual saliency hypothesis, fixation sites are selected based on image properties generated in a bottom-up manner from the current scene. On this hypothesis, gaze control is, to a large degree, a reaction to the visual properties of the stimulus confronting the viewer. In contrast, according to what we will call the cognitive control hypothesis, fixation sites are selected based on the needs of the cognitive system in relation to the current task. On this hypothesis, eye movements are primarily controlled by task goals interacting with a semantic interpretation of the scene and memory for similar viewing episodes (Hayhoe & Ballard, 2005; Henderson & Ferreira, 2004a). On the cognitive control hypothesis, the visual stimulus is, of course, still relevant: The eyes are typically directed to objects and features rather than to uniform scene areas (Henderson & Hollingworth, 1999); however, the relevance of a particular object or feature in the stimulus is determined by cognitive information-gathering needs rather than inherent visual salience. The visual saliency hypothesis has generated a good deal of interest over the past several years, and in many ways has become the dominant view in the computational vision literature. This hypothesis has received primary support from two lines of investigation.
First, computational models have been developed that use known properties of the visual system to generate a saliency map or landscape of visual salience across an image (Itti & Koch, 2000, 2001; Koch & Ullman, 1985). In these models, the visual properties present in an image give rise to a 2D map that explicitly marks regions that are different from their surround on image dimensions such as color, intensity, contrast, and edge orientation (Itti & Koch, 2000; Koch & Ullman, 1985; Parkhurst, Law, & Niebur, 2002; Torralba, 2003), contour junctions, termination of edges, stereo disparity, and shading (Koch & Ullman, 1985), and dynamic factors such as motion (Koch & Ullman, 1985; Rosenholtz, 1999). The maps are generated for each image dimension over multiple spatial scales and are then combined to create a single saliency map. Regions that are uniform along some image dimension are considered uninformative, whereas those that differ from neighboring regions across spatial scales are taken to be potentially informative and worthy of fixation. The visual saliency map approach serves an important heuristic function in the study of gaze control because it provides an explicit model that generates precise quantitative predictions about fixation locations and their sequences, and these predictions have been found to correlate with observed human fixations under some conditions (e.g., Parkhurst et al., 2002). Second, using a scene statistics approach, local scene patches surrounding fixation points have been analyzed to determine whether fixated regions differ in some image properties from regions that are not fixated. For example, high spatial frequency content and edge density have been found to be somewhat greater at fixated than non-fixated locations (Mannan, Ruddock, & Wooding, 1996, 1997b). Furthermore, local contrast (the standard deviation of intensity in a patch) is higher and two-point intensity correlation (intensity of the fixated point and nearby points) is lower for fixated scene patches than control patches (Krieger, Rentschler, Hauske, Schill, & Zetzsche, 2000; Parkhurst & Neibur, 2003; Reinagel & Zador, 1999). Modulating the evidence supporting the visual saliency hypothesis, recent evidence suggests that fixation sites are tied less strongly to saliency when meaningful scenes are viewed during active viewing tasks (Land & Hayhoe, 2001; Turano, Geruschat, & Baker 2003). According to one hypothesis, the modulation of visual salience by knowledge-driven control may increase over time within a scene-viewing episode as more knowledge is acquired about the identities and meanings of previously fixated objects and their relationships to each other and to the scene (Henderson, Weeks, & Hollingworth, 1999). However, even the very first saccade in a scene can often take the eyes in the likely direction of a search target, whether or not the targ", "title": "" } ]
[ { "docid": "5806b1bd779c7f39ecf2dac3f51ce267", "text": "We have conducted two investigations on the ability of human participants to solve challenging collective coordination tasks in a distributed fashion with limited perception and communication capabilities similar to those of a simple ground robot. In these investigations, participants were gathered in a laboratory of networked workstations and were given a series of different collective tasks with varying communication and perception capabilities. Here, we focus on our latest investigation and describe our methodology, platform design considerations, and highlight some interesting observed behaviors. These investigations are the preliminary phase in designing a formal strategy for learning human-inspired behaviors for solving complex distributed multirobot problems, such as pattern formation.", "title": "" }, { "docid": "3744510fa3cec75c1ccb5abbdb9d71ed", "text": "49  Abstract— Typically, computer viruses and other malware are detected by searching for a string of bits found in the virus or malware. Such a string can be viewed as a \" fingerprint \" of the virus identified as the signature of the virus. The technique of detecting viruses using signatures is known as signature based detection. Today, virus writers often camouflage their viruses by using code obfuscation techniques in an effort to defeat signature-based detection schemes. So-called metamorphic viruses transform their code as they propagate, thus evading detection by static signature-based virus scanners, while keeping their functionality but differing in internal structure. Many dynamic analysis based detection have been proposed to detect metamorphic viruses but dynamic analysis technique have limitations like difficult to learn normal behavior, high run time overhead and high false positive rate compare to static detection technique. A similarity measure method has been successfully applied in the field of document classification problem. We want to apply similarity measures methods on static feature, API calls of executable to classify it as malware or benign. In this paper we present limitations of signature based detection for detecting metamorphic viruses. We focus on statically analyzing an executable to extract API calls and count the frequency this API calls to generate the feature set. These feature set is used to classify unknown executable as malware or benign by applying various similarity function. I. INTRODUCTION In today's age, where a majority of the transactions involving sensitive information access happen on computers and over the internet, it is absolutely imperative to treat information security as a concern of paramount importance. Computer viruses and other malware have been in existence from the very early days of the personal computer and continue to pose a threat to home and enterprise users alike. A computer virus by definition is \" A program that recursively and explicitly copies a possibly evolved version of itself \" [1]. A virus copies itself to a host file or system area. Once it gets control, it multiplies itself to form newer generations. A virus may carry out damaging activities on the host machine such as corrupting or erasing files, overwriting the whole hard disk, or crashing the computer. These viruses remain harmless but", "title": "" }, { "docid": "aae02afb7099e635d2fefb89a99110db", "text": "It is important to detect anomalous inputs when deploying machine learning systems. 
The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and smalland large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.", "title": "" }, { "docid": "e2067c274a96763a01aa04d628301dbc", "text": "In this paper we propose a model to learn multimodal multilingual representations for matching images and sentences in different languages, with the aim of advancing multilingual versions of image search and image understanding. Our model learns a common representation for images and their descriptions in two different languages (which need not be parallel) by considering the image as a pivot between two languages. We introduce a new pairwise ranking loss function which can handle both symmetric and asymmetric similarity between the two modalities. We evaluate our models on image-description ranking for German and English, and on semantic textual similarity of image descriptions in English. In both cases we achieve state-of-the-art performance.", "title": "" }, { "docid": "13a64221ff915439d846481050e52108", "text": "This paper proposes a new maximum power point tracking (MPPT) method for photovoltaic (PV) systems by using Kalman filter. A Perturbation & Observation (P&O) method is widely used presently due to its easy implementation and simplicity. The P&O usually requires of dithering scheme to reduce noise effects, but it slows the tracking response. Tracking speed is the most important factor on improving efficiency in frequent environmental changes. The proposed method is based on the Kalman filter. It shows the fast tracking performance on noisy conditions, so that enables to generate more power in rapid weather changes than the P&O. Simulation results are provided the comparison between the proposed method and P&O on time responses for conditions of sudden system restart and sudden irradiance change.", "title": "" }, { "docid": "dec3f821a1f9fc8102450a4add31952b", "text": "Homicide by hanging is an extremely rare incident [1]. Very few cases have been reported in which a person is rendered senseless and then hanged to simulate suicidal death; though there are a lot of cases in wherein a homicide victim has been hung later. We report a case of homicidal hanging of a young Sikh individual found hanging in a well. It became evident from the results of forensic autopsy that the victim had first been given alcohol mixed with pesticides and then hanged by his turban from a well. 
The rare combination of lynching (homicidal hanging) and use of organophosphorus pesticide poisoning as a means of homicide are discussed in this paper.", "title": "" }, { "docid": "4348c83744962fcc238e7f73abecfa5e", "text": "We introduce MeSys, a meaning-based approach, for solving English math word problems (MWPs) via understanding and reasoning in this paper. It first analyzes the text, transforms both body and question parts into their corresponding logic forms, and then performs inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provides the flexibility for annotating an extracted math quantity with its associated context information (i.e., the physical meaning of this quantity). Statistical models are proposed to select the operator and operands. A noisy dataset is designed to assess if a solver solves MWPs mainly via understanding or mechanical pattern matching. Experimental results show that our approach outperforms existing systems on both benchmark datasets and the noisy dataset, which demonstrates that the proposed approach understands the meaning of each quantity in the text more.", "title": "" }, { "docid": "7db9cf29dd676fa3df5a2e0e95842b6e", "text": "We present a novel approach to still image denoising based on effective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and due to the similarity between them, the data in the array exhibit high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and effectively attenuate the noise by shrinkage of the transform coefficients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in a sliding manner, the final estimate is computed as a weighted average of all overlapping block-estimates. A fast and efficient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.", "title": "" }, { "docid": "50898b520ede4dc878b10ba3908d5c0d", "text": "Recent advances in deep convolutional neural networks (CNNs) have motivated researchers to adapt CNNs to directly model points in 3D point clouds. Modeling local structure has been proven to be important for the success of convolutional architectures, and researchers exploited the modeling of local point sets in the feature extraction hierarchy. However, limited attention has been paid to explicitly model the geometric structure amongst points in a local region. To address this problem, we propose GeoCNN, which applies a generic convolution-like operation dubbed as GeoConv to each point and its local neighborhood. Local geometric relationships among points are captured when extracting edge features between the center and its neighboring points. We first decompose the edge feature extraction process onto three orthogonal bases, and then aggregate the extracted features based on the angles between the edge vector and the bases. This encourages the network to preserve the geometric structure in Euclidean space throughout the feature extraction hierarchy.
GeoConv is a generic and efficient operation that can be easily integrated into 3D point cloud analysis pipelines for multiple applications. We evaluate Geo-CNN on ModelNet40 and KITTI and achieve state-of-the-art performance.", "title": "" }, { "docid": "57c0db8c200b94baa28779ff4f47d630", "text": "The development of the Web services lets many users easily provide their opinions recently. Automatic summarization of enormous sentiments has been expected. Intuitively, we can summarize a review with traditional document summarization methods. However, such methods have not well-discussed “aspects”. Basically, a review consists of sentiments with various aspects. We summarize reviews for each aspect so that the summary presents information without biasing to a specific topic. In this paper, we propose a method for multiaspects review summarization based on evaluative sentence extraction. We handle three features; ratings of aspects, the tf -idf value, and the number of mentions with a similar topic. For estimating the number of mentions, we apply a clustering algorithm. By integrating these features, we generate a more appropriate summary. The experiment results show the effectiveness of our method.", "title": "" }, { "docid": "fc32d0734ea83a4252339c6a2f98b0ee", "text": "The security of Android depends on the timely delivery of updates to fix critical vulnerabilities. In this paper we map the complex network of players in the Android ecosystem who must collaborate to provide updates, and determine that inaction by some manufacturers and network operators means many handsets are vulnerable to critical vulnerabilities. We define the FUM security metric to rank the performance of device manufacturers and network operators, based on their provision of updates and exposure to critical vulnerabilities. Using a corpus of 20 400 devices we show that there is significant variability in the timely delivery of security updates across different device manufacturers and network operators. This provides a comparison point for purchasers and regulators to determine which device manufacturers and network operators provide security updates and which do not. We find that on average 87.7% of Android devices are exposed to at least one of 11 known critical vulnerabilities and, across the ecosystem as a whole, assign a FUM security score of 2.87 out of 10. In our data, Nexus devices do considerably better than average with a score of 5.17; and LG is the best manufacturer with a score of 3.97.", "title": "" }, { "docid": "367ba3305217805d6068d6117a693a11", "text": "Many efforts have been devoted to training generative latent variable models with autoregressive decoders, such as recurrent neural networks (RNN). Stochastic recurrent models have been successful in capturing the variability observed in natural sequential data such as speech. We unify successful ideas from recently proposed architectures into a stochastic recurrent model: each step in the sequence is associated with a latent variable that is used to condition the recurrent dynamics for future steps. Training is performed with amortized variational inference where the approximate posterior is augmented with a RNN that runs backward through the sequence. In addition to maximizing the variational lower bound, we ease training of the latent variables by adding an auxiliary cost which forces them to reconstruct the state of the backward recurrent network. 
This provides the latent variables with a task-independent objective that enhances the performance of the overall model. We found this strategy to perform better than alternative approaches such as KL annealing. Although being conceptually simple, our model achieves state-of-the-art results on standard speech benchmarks such as TIMIT and Blizzard and competitive performance on sequential MNIST. Finally, we apply our model to language modeling on the IMDB dataset where the auxiliary cost helps in learning interpretable latent variables.", "title": "" }, { "docid": "2fd0aa7c412258641dc7b8c0960f0031", "text": "Quantification of biological or biochemical processes are of utmost importance for medical, biological and biotechnological applications. However, converting the biological information to an easily processed electronic signal is challenging due to the complexity of connecting an electronic device directly to a biological environment. Electrochemical biosensors provide an attractive means to analyze the content of a biological sample due to the direct conversion of a biological event to an electronic signal. Over the past decades several sensing concepts and related devices have been developed. In this review, the most common traditional techniques, such as cyclic voltammetry, chronoamperometry, chronopotentiometry, impedance spectroscopy, and various field-effect transistor based methods are presented along with selected promising novel approaches, such as nanowire or magnetic nanoparticle-based biosensing. Additional measurement techniques, which have been shown useful in combination with electrochemical detection, are also summarized, such as the electrochemical versions of surface plasmon resonance, optical waveguide lightmode spectroscopy, ellipsometry, quartz crystal microbalance, and scanning probe microscopy. The signal transduction and the general performance of electrochemical sensors are often determined by the surface architectures that connect the sensing element to the biological sample at the nanometer scale. The most common surface modification techniques, the various electrochemical transduction mechanisms, and the choice of the recognition receptor molecules all influence the ultimate sensitivity of the sensor. New nanotechnology-based approaches, such as the use of engineered ion-channels in lipid bilayers, the encapsulation of enzymes into vesicles, polymersomes, or polyelectrolyte capsules provide additional possibilities for signal amplification. In particular, this review highlights the importance of the precise control over the delicate interplay between surface nano-architectures, surface functionalization and the chosen sensor transducer principle, as well as the usefulness of complementary characterization tools to interpret and to optimize the sensor response.", "title": "" }, { "docid": "3c745325cca06061d0e11cbc30f847f9", "text": "Walsh-Hadamard transform is used in a wide variety of scientific and engineering applications, including bent functions and cryptanalytic optimization techniques in cryptography. In linear cryptanalysis, it is a key question to find a good linear approximation, which holds with probability (1 + d)/2 and the bias d is large in absolute value. Lu and Desmedt (2011) take a step toward answering this key question in a more generalized setting and initiate the work on the generalized bias problem with linearly-dependent inputs. In this paper, we give fully extended results. Deep insights on assumptions behind the problem are given. 
We take an information-theoretic approach to show that our bias problem assumes the setting of the maximum input entropy subject to the input constraint. By means of Walsh transform, the bias can be expressed in a simple form. It incorporates Piling-up lemma as a special case. Secondly, as application, we answer a long-standing open problem in correlation attacks on combiners with memory. We give a closed-form exact solution for the correlation involving the multiple polynomial of any weight for the first time. We also give Walsh analysis for numerical approximation. An interesting bias phenomenon is uncovered, i.e., for even and odd weight of the polynomial, the correlation behaves differently. Thirdly, we introduce the notion of weakly biased distribution, and study bias approximation for a more general case by Walsh analysis. We show that for weakly biased distribution, Piling-up lemma is still valid. Our work shows that Walsh analysis is useful and effective to a broad class of cryptanalysis problems.", "title": "" }, { "docid": "609f6bc6f77e2ecc5e0b8f2a336c0b71", "text": "Active learning is a widely-used training strategy for maximizing predictive performance subject to a fixed annotation budget. Between rounds of training, an active learner iteratively selects examples for annotation, typically based on some measure of the model’s uncertainty, coupling the acquired dataset with the underlying model. However, owing to the high cost of annotation and the rapid pace of model development, labeled datasets may remain valuable long after a particular model is surpassed by new technology. In this paper, we investigate the transferability of datasets collected with an acquisition model A to a distinct successor model S. We seek to characterize whether the benefits of active learning persist when A and S are different models. To this end, we consider two standard NLP tasks and associated datasets: text classification and sequence tagging. We find that training S on a dataset actively acquired with a (different) model A typically yields worse performance than when S is trained with “native” data (i.e., acquired actively using S), and often performs worse than training on i.i.d. sampled data. These findings have implications for the use of active learning in practice, suggesting that it is better suited to cases where models are updated no more frequently than labeled data.", "title": "" }, { "docid": "b8172acdca89e720783a803d98b271ad", "text": "Vertically stacked nanowire field effect transistors currently dominate the race to become mainstream devices for 7-nm CMOS technology node. However, these devices are likely to suffer from the issue of nanowire stack position dependent drain current. In this paper, we show that the nanowire located at the bottom of the stack is farthest away from the source/drain silicide contacts and suffers from higher series resistance as compared to the nanowires that are higher up in the stack. It is found that upscaling the diameter of lower nanowires with respect to the upper nanowires improved uniformity of the current in each nanowire, but with the drawback of threshold voltage reduction. 
We propose to increase source/drain trench silicide depth as a more promising solution to this problem over the nanowire diameter scaling, without compromising on power or performance of these devices.", "title": "" }, { "docid": "4f63c03e9a4d2049535a48cd7e8835d8", "text": "This article reports on a histological and morphological study on the induction of in vitro flowering in vegetatively propagated plantlets from different date palm cultivars. The study aimed to further explore the control of in vitro flower induction in relation to the photoperiodic requirements in date palm and to come up with a novel system that may allow for early sex determination through plant cycle reduction. In fact, the in vitro reversion of a shoot meristem from a vegetative to a reproductive state was achieved within 1–5 months depending on the variety considered. This reversion was accompanied by several morphological transformations that affected the apical part of the leafy bud corresponding mainly to a size increase of the prefloral meristem zone followed by the appearance of an inflorescence. The flowers that were produced in vitro were histologically and morphologically similar to those formed in vivo. The histological examination of the in vitro flowering induction process showed that the conversion into inflorescences involved the entire apical vegetative meristem of the plantlet used as a starting material and brought about a change in its anatomical structure without affecting its phyllotaxis and the leaf shape. Through alternating between hormone-free and hormone-containing media under different light/dark conditions, the highest flower induction rates were obtained with a basal Murashige and Skoog medium. A change in the architectural model of date palm was induced because unlike the natural lateral flowering, in vitro flowering was terminal. Such in vitro flower induction allowed a significant reduction in plant cycle and can, therefore, be considered a promising candidate to save time for future improvement and selection programs in date palm.", "title": "" }, { "docid": "e5380801d69c3acf7bfe36e868b1dadb", "text": "Skin-mountable chemical sensors using flexible chemically sensitive nanomaterials are of great interest for electronic skin (e-skin) application. To build these sensors, the emerging atomically thin two-dimensional (2D) layered semiconductors could be a good material candidate. Herein, we show that a large-area WS2 film synthesized by sulfurization of a tungsten film exhibits high humidity sensing performance both in natural flat and high mechanical flexible states (bending curvature down to 5 mm). The conductivity of as-synthesized WS2 increases sensitively over a wide relative humidity range (up to 90%) with fast response and recovery times in a few seconds. By using graphene as electrodes and thin polydimethylsiloxane (PDMS) as substrate, a transparent, flexible, and stretchable humidity sensor was fabricated. This senor can be well laminated onto skin and shows stable water moisture sensing behaviors in the undeformed relaxed state as well as under compressive and tensile loadings. Furthermore, its high sensing performance enables real-time monitoring of human breath, indicating a potential mask-free breath monitoring for healthcare application. 
We believe that such a skin-activity compatible WS2 humidity sensor may shed light on developing low power consumption wearable chemical sensors based on 2D semiconductors.", "title": "" }, { "docid": "55a6c14a7445b1903223f59ad4ad9b77", "text": "Energy and environmental issues are among the major concerns facing the global community today. Transportation fuel represents a large proportion of energy consumption, not only in the US, but also worldwide. As fossil fuel is being depleted, new substitutes are needed to provide energy. Ethanol, which has been produced mainly from the fermentation of corn starch in the US, has been regarded as one of the main liquid transportation fuels that can take the place of fossil fuel. However, limitations in the supply of starch are creating a need for different substrates. Forest biomass is believed to be one of the most abundant sources of sugars, although much research has been reported on herbaceous grass, agricultural residue, and municipal waste. The use of biomass sugars entails pretreatment to disrupt the lignin-carbohydrate complex and expose carbohydrates to enzymes. This paper reviews pretreatment technologies from the perspective of their potential use with wood, bark, and forest residues. Acetic acid catalysis is suggested for the first time to be used in steam explosion pretreatment. Its pretreatment economics, as well as that for ammonia fiber explosion pretreatment, is estimated. This analysis suggests that both are promising techniques worthy of further exploration or optimization for commercialization.", "title": "" }, { "docid": "64c06bffe4aeff54fbae9d87370e552c", "text": "Social networking sites occupy increasing fields of daily life and act as important communication channels today. But recent research also discusses the dark side of these sites, which expresses in form of stress, envy, addiction or even depression. Nevertheless, there must be a reason why people use social networking sites, even though they face related risks. One reason is human curiosity that tempts users to behave like this. The research on hand presents the impact of curiosity on user acceptance of social networking sites, which is theorized and empirically evaluated by using the technology acceptance model and a quantitative study among Facebook users. It further reveals that especially two types of human curiosity, epistemic and interpersonal curiosity, influence perceived usefulness and perceived enjoyment, and with it technology acceptance.", "title": "" } ]
scidocsrr
5775c1b90a85317d08cfbb4fd25bc977
Nonparametric General Reinforcement Learning
[ { "docid": "1e4906189e161f15d12d0e174fdaf76d", "text": "Recently, it has been shown how sampling actions from the predictive distribution over the optimal action—sometimes called Thompson sampling—can be applied to solve sequential adaptive control problems, when the optimal policy is known for each possible environment. The predictive distribution can then be constructed by a Bayesian superposition of the optimal policies weighted by their posterior probability that is updated by Bayesian inference and causal calculus. Here we discuss three important features of this approach. First, we discuss in how far such Thompson sampling can be regarded as a natural consequence of the Bayesian modeling of policy uncertainty. Second, we show how Thompson sampling can be used to study interactions between multiple adaptive agents, thus, opening up an avenue of game-theoretic analysis. Third, we show how Thompson sampling can be applied to infer causal relationships when interacting with an environment in a sequential fashion. In summary, our results suggest that Thompson sampling might not merely be a useful heuristic, but a principled method to address problems of adaptive sequential decision-making and causal inference.", "title": "" } ]
[ { "docid": "6dd0f5af407e624ae105fdf695f39320", "text": "Forecasting the behavior of the financial market is a nontrivial task that relies on the discovery of strong empirical regularities in observations of the system. These regularities are often masked by noise and the financial time series often have nonlinear and non-stationary behavior. With the rise of artificial intelligence technology and the growing interrelated markets of the last two decades offering unprecedented trading opportunities, technical analysis simply based on forecasting models is no longer enough. To meet the trading challenge in today’s global market, technical analysis must be redefined. Before using the neural network models some issues such as data preprocessing, network architecture and learning parameters are to be considered. Data normalization is a fundamental data preprocessing step for learning from data before feeding to the Artificial Neural Network (ANN). Finding an appropriate method to normalize time series data is one of the most critical steps in a data mining process. In this paper we considered two ANN models and two neuro-genetic hybrid models for forecasting the closing prices of Indian stock market. The present pursuit evaluates impact of various normalization methods on four intelligent forecasting models i.e. a simple ANN model trained with gradient descent (ANN-GD), genetic algorithm (ANN-GA), and a functional link artificial neural network model trained with GD (FLANN-GD) and genetic algorithm (FLANN-GA). The present study is applied on daily closing price of Bombay stock exchange (BSE) and several empirical as well as experimental result shows that these models can be promising tools for the Indian stock market forecasting and the prediction performance of the models are strongly influenced by the data preprocessing method used.", "title": "" }, { "docid": "020c31f1466a5cf16188993078137a93", "text": "This paper is more about the questions for a theory of language evolution than about the answers. I’d like to ask what there is for a theory of the evolution of language to explain, and I want to show how this depends on what you think language is. So, what is language? Everybody recognizes that language is partly culturally dependent: there is a huge variety of disparate languages in the world, passed down through cultural transmission. If that’s all there is to language, a theory of the evolution of language has nothing at all to explain. We need only explain the cultural evolution of languages: English, Dutch, Mandarin, Hausa, etc. are products of cultural history. However, most readers of the present volume probably subscribe to the contemporary scientific view of language, which goes beneath the cultural differences among languages. It focuses on individual language users and asks:", "title": "" }, { "docid": "80bfe7bcea0a8db5667f3f5d2c85b16b", "text": "We present a non-photorealistic algorithm for automatically retargeting images for a variety of display devices, while preserving the images' important features and qualities. Image manipulation techniques such as linear resizing and cropping work well for images containing a single important object. However, problems such as degradation of image quality and important information loss occur when these techniques have been automatically applied to images with multiple objects. Our algorithm addresses the case of multiple important objects in an image. 
We first segment the image, and generate an importance map based on both saliency and face detection. Regions are then resized and repositioned to fit within a specified size based on the importance map. NSF 01416284/0415083", "title": "" }, { "docid": "3b7436cf4660fb82eeb7efbf9c413159", "text": "The practical work described here was designed in the aim of combining several periods that were previously carried-out independently during the academic year and to more appropriately mimic a \"research\" environment. It illustrates several fundamental biochemical principles as well as experimental aspects and important techniques including spectrophotometry, chromatography, centrifugation, and electrophoresis. Lactate dehydrogenase (LDH) is an enzyme of choice for a student laboratory experiment. This enzyme has many advantages, namely its relative high abundance, high specific activity and high stability. In the first part, the purification scheme starting from pig heart includes ammonium sulphate fractionation, desalting by size exclusion chromatography, anion exchange chromatography and pseudo-affinity chromatography. In the second part of the work the obtained fractions are accessed for protein and activity content in order to evaluate the efficiency of the different purification steps, and are also characterised by electrophoresis using non-denaturing and denaturing conditions. Finally, in the third part, the purified enzyme is subjected to comprehensive analysis of its kinetic properties and compared to those of a commercial skeletal muscle LDH preparation. The results presented thereafter are representative of the data-sets obtained by the student-pairs and are comparable to those obtained by the instructors and the reference publications. This multistep purification of an enzyme from its source material, where students perform different purification techniques over successive laboratory days, the characterisation of the purified enzyme, and the extensive approach of enzyme kinetics, naturally fits into a project-based biochemistry learning process.", "title": "" }, { "docid": "7f368ea27e9aa7035c8da7626c409740", "text": "The GANs are generative models whose random samples realistically reflect natural images. It also can generate samples with specific attributes by concatenating a condition vector into the input, yet research on this field is not well studied. We propose novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. We mainly introduce two models: an information retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector. These methods significantly enhance log-likelihood of test data under the conditional distributions compared to the methods of concatenation.", "title": "" }, { "docid": "c3b691cd3671011278ecd30563b27245", "text": "We formalize weighted dependency parsing as searching for maximum spanning trees (MSTs) in directed graphs. Using this representation, the parsing algorithm of Eisner (1996) is sufficient for searching over all projective trees in O(n3) time. More surprisingly, the representation is extended naturally to non-projective parsing using Chu-Liu-Edmonds (Chu and Liu, 1965; Edmonds, 1967) MST algorithm, yielding anO(n2) parsing algorithm. 
We evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005) and show that MST parsing increases efficiency and accuracy for languages with non-projective dependencies.", "title": "" }, { "docid": "8ab53b0100ce36ace61660c9c8e208b4", "text": "A novel current-pumped battery charger (CPBC) is proposed in this paper to increase the Li-ion battery charging performance. A complete charging process, consisting of three subprocesses, namely: 1) the bulk current charging process; 2) the pulsed current charging process; and 3) the pulsed float charging process, can be automatically implemented by using the inherent characteristics of current-pumped phase-locked loop (CPLL). A design example for a 700-mA·h Li-ion battery is built to assess the CPBC's performance. In comparison with the conventional phase-locked battery charger, the battery available capacity and charging efficiency of the proposed CPBC are improved by about 6.9% and 1.5%, respectively. The results of the experiment show that a CPLL is really suitable for carrying out a Li-ion battery pulse charger.", "title": "" }, { "docid": "bb9dbc6b86f45787c03a146cdcfdf5c4", "text": "AIM\nThe purpose of this study was to analyze tooth loss after root fractures and to assess the influence of the type of healing and the location of the root fracture. Furthermore, the actual cause of tooth loss was analyzed.\n\n\nMATERIAL AND METHODS\nLong-term survival rates were calculated using data from 492 root-fractured teeth in 432 patients. The cause of tooth loss was assessed as being the result of either pulp necrosis (including endodontic failures), new traumas or excessive mobility. The statistics used were Kaplan-Meier and the log rank method.\n\n\nRESULTS AND CONCLUSIONS\nThe location of the root fracture had a strong significant effect on tooth survival (P = 0.0001). The 10-year tooth survival of apical root fractures was 89% [95% confidence interval (CI), 78-99%], of mid-root fractures 78% (CI, 64-92%), of cervical-mid-root fractures 67% (CI, 50-85%), and of cervical fractures 33% (CI, 17-49%). The fracture-healing type offered further prognostic information. No tooth loss was observed in teeth with hard tissue fracture healing regardless of the position of the fracture. For teeth with interposition of connective tissue, the location of the fracture had a significant influence on tooth loss (P = 0.0001). For teeth with connective tissue healing, the estimated 8-year survival of apical, mid-root, and cervical-mid-root fractures were all more than 80%, whereas the estimated 8-year survival of cervical fractures was 25% (CI, 7-43%). For teeth with non-healing with interposition of granulation tissue, the location of the fracture showed a significant influence on tooth loss (P = 0.0001). The cause of tooth loss was found to be very dependent upon the location of the fracture. In conclusion, the long-term tooth survival of root fractures was strongly influenced by the type of healing and the location of the fracture.", "title": "" }, { "docid": "9efaf59beb300d1230f3a03fc7c8f72c", "text": "The growth of the Internet has enabled the popularity of open online learning platforms to increase over the years. This has led to the inception of Massive Open Online Courses (MOOCs) that globally enrol millions of people. 
Such courses operate under the concept of open learning, where content does not have to be delivered via standard mechanisms that institutions employ, such as physically attending lectures. Instead learning occurs online via recorded lecture material and online tasks. This shift has allowed more people to gain access to education, regardless of their learning background. However, despite these advancements, completion rates for MOOCs are low. The paper presents our approach to learner prediction in MOOCs by exploring the impact that technology has on open learning and identifies how data about student performance can be captured to predict trends so that at-risk students can be identified before they drop out. The study we have undertaken uses the eRegister system, which has been developed to capture and analyze data. The results indicate that high/active engagement, interaction and attendance are reflective of higher marks. Additionally, our approach is able to normalize the data into a consistent series so that the end result can be transformed into a dashboard of statistics that can be used by organizers of the MOOC. Based on this, we conclude that there is a fundamental need for predictive systems within learning communities.", "title": "" }, { "docid": "88b5b96e381d0a526271a67d3b328115", "text": "In the last decade, it has become apparent that embedded systems are integral parts of our every day lives. The wireless nature of many embedded applications as well as their omnipresence has made the need for security and privacy preserving mechanisms particularly important. Thus, as field programmable gate arrays (FPGAs) become integral parts of embedded systems, it is imperative to consider their security as a whole. This contribution provides a state-of-the-art description of security issues on FPGAs, both from the system and implementation perspectives. We discuss the advantages of reconfigurable hardware for cryptographic applications, show potential security problems of FPGAs, and provide a list of open research problems. Moreover, we summarize both public and symmetric-key algorithm implementations on FPGAs.", "title": "" }, { "docid": "98e9dff9ba946dc1ea6d50b1271a0685", "text": "OBJECTIVES\nTo evaluate the effect of Carbopol gel formulations containing pilocarpine on the morphology and morphometry of the vaginal epithelium of castrated rats.\n\n\nMETHODS\nThirty-one female Wistar-Hannover rats were randomly divided into four groups: the control Groups I (n=7, rats in persistent estrus; positive controls) and II (n=7, castrated rats, negative controls) and the experimental Groups, III (n=8) and IV (n=9). Persistent estrus (Group I) was achieved with a subcutaneous injection of testosterone propionate on the second postnatal day. At 90 days postnatal, rats in Groups II, III and IV were castrated and treated vaginally for 14 days with Carbopol gel (vehicle alone) or Carbopol gel containing 5% and 15% pilocarpine, respectively. Next, all of the animals were euthanized and their vaginas were removed for histological evaluation. A non-parametric test with a weighted linear regression model was used for data analysis (p<0.05).\n\n\nRESULTS\nThe morphological evaluation showed maturation of the vaginal epithelium with keratinization in Group I, whereas signs of vaginal atrophy were present in the rats of the other groups. 
Morphometric examinations showed mean thickness values of the vaginal epithelium of 195.10±12.23 μm, 30.90±1.14 μm, 28.16±2.98 μm and 29.84±2.30 μm in Groups I, II, III and IV, respectively, with statistically significant differences between Group I and the other three groups (p<0.0001) and no differences between Groups II, III and IV (p=0.0809).\n\n\nCONCLUSION\nTopical gel formulations containing pilocarpine had no effect on atrophy of the vaginal epithelium in the castrated female rats.", "title": "" }, { "docid": "1b0bccf41db4d323ac585d46475ce6f1", "text": "For electric power transmission, high voltage overhead power lines play an important role as the costs for power transmission are comparatively low. However, the environmental conditions in many geographical regions can change over a wide range. Due to the high voltages, adequate distances between the conductors and objects in the environment have to be ensured for safety reasons. However, sag of the conductors (e.g. due to temperature variations or aging, icing of conductors as a result of extreme weather conditions) may increase safety margins and limit the operability of these power lines. Heavy loads due to icing or vibrations excited by winds increase the risk of line breakage. With online condition monitoring of power lines, critical states or states with increased wear for the conductor may be detected early and appropriate counter measures can be applied. In this paper we investigate possibilities for monitoring devices that are directly mounted onto a conductor. It is demonstrated that such a device can be powered from the electric field around the conductor and that electronic equipment can be protected from the strong electric and magnetic fields as well as transient signals due to partial discharge events.", "title": "" }, { "docid": "0c417cce8944b4d924451aa88fe2b7a3", "text": "Estimation of social influence in networks can be substantially biased in observational studies due to homophily and network correlation in exposure to exogenous events. Randomized experiments, in which the researcher intervenes in the social system and uses randomization to determine how to do so, provide a methodology for credibly estimating of causal effects of social behaviors. In addition to addressing questions central to the social sciences, these estimates can form the basis for effective marketing and public policy. In this review, we discuss the design space of experiments to measure social influence through combinations of interventions and randomizations. We define an experiment as combination of (1) a target population of individuals connected by an observed interaction network, (2) a set of treatments whereby the researcher will intervene in the social system, (3) a randomization strategy which maps individuals or edges to treatments, and (4) a measurement of an outcome of interest after treatment has been assigned. We review experiments that demonstrate potential experimental designs and we evaluate their advantages and tradeoffs for answering different types of causal questions about social influence. 
We show how randomization also provides a basis for statistical inference when analyzing these experiments.", "title": "" }, { "docid": "496bd9901f68646e06977ce27620752e", "text": "PURPOSE\nTo examine the longitudinal association between diet quality and depression using prospective data from the Australian Longitudinal Study on Women's Health.\n\n\nMETHODS\nWomen born in 1946-1951 (n = 7877) were followed over 9 years starting from 2001. Dietary intake was assessed using the Dietary Questionnaire for Epidemiological Studies (version 2) in 2001 and a shortened form in 2007 and 2010. Diet quality was summarised using the Australian Recommended Food Score. Depression was measured using the 10-item Centre for Epidemiologic Depression Scale and self-reported physician diagnosis. Pooled logistic regression models including time-varying covariates were used to examine associations between diet quality tertiles and depression. Women were also categorised based on changes in diet quality during 2001-2007. Analyses were adjusted for potential confounders.\n\n\nRESULTS\nThe highest tertile of diet quality was associated marginally with lower odds of depression (OR 0.94; 95 % CI 0.83, 1.00; P = 0.049) although no significant linear trend was observed across tertiles (OR 1.00; 95 % CI 0.94, 1.10; P = 0.48). Women who maintained a moderate or high score over 6 years had a 6-14 % reduced odds of depression compared with women who maintained a low score (moderate vs low score-OR 0.94; 95 % CI 0.80, 0.99; P = 0.045; high vs low score-OR 0.86; 95 % CI 0.77, 0.96; P = 0.01). Similar results were observed in analyses excluding women with prior history of depression.\n\n\nCONCLUSION\nLong-term maintenance of good diet quality may be associated with reduced odds of depression. Randomised controlled trials are needed to eliminate the possibility of residual confounding.", "title": "" }, { "docid": "e40ac3775c0891951d5f375c10928ca0", "text": "The present study investigates the role of process and social oriented smartphone usage, emotional intelligence, social stress, self-regulation, gender, and age in relation to habitual and addictive smartphone behavior. We conducted an online survey among 386 respondents. The results revealed that habitual smartphone use is an important contributor to addictive smartphone behavior. Process related smartphone use is a strong determinant for both developing habitual and addictive smartphone behavior. People who extensively use their smartphones for social purposes develop smartphone habits faster, which in turn might lead to addictive smartphone behavior. We did not find an influence of emotional intelligence on habitual or addictive smartphone behavior, while social stress positively influences addictive smartphone behavior, and a failure of self-regulation seems to cause a higher risk of addictive smartphone behavior. Finally, men experience less social stress than women, and use their smartphones less for social purposes. The result is that women have a higher chance of developing habitual or addictive smartphone behavior. Age negatively affects process and social usage, and social stress. There is a positive effect on self-regulation. Older people are therefore less likely to develop habitual or addictive smartphone behaviors.", "title": "" }, { "docid": "c3782fb81ebb5c7aee910922d4accce0", "text": "A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. 
The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.", "title": "" }, { "docid": "675c1f41d53f826228ad7521107199df", "text": "The first 100 years of experimental psychology were dominated by 2 major schools of thought: behaviorism and cognitive science. Here the authors consider the common philosophical commitment to determinism by both schools, and how the radical behaviorists' thesis of the determined nature of higher mental processes is being pursued today in social cognition research on automaticity. In harmony with \"dual process\" models in contemporary cognitive science, which equate determined processes with those that are automatic and which require no intervening conscious choice or guidance, as opposed to \"controlled\" processes which do, the social cognition research on the automaticity of higher mental processes provides compelling evidence for the determinism of those processes. This research has revealed that social interaction, evaluation and judgment, and the operation of internal goal structures can all proceed without the intervention of conscious acts of will and guidance of the process.", "title": "" }, { "docid": "a52fce0b7419d745a85a2bba27b34378", "text": "Playing action video games enhances several different aspects of visual processing; however, the mechanisms underlying this improvement remain unclear. Here we show that playing action video games can alter fundamental characteristics of the visual system, such as the spatial resolution of visual processing across the visual field. To determine the spatial resolution of visual processing, we measured the smallest distance a distractor could be from a target without compromising target identification. This approach exploits the fact that visual processing is hindered as distractors are brought close to the target, a phenomenon known as crowding. Compared with nonplayers, action-video-game players could tolerate smaller target-distractor distances. Thus, the spatial resolution of visual processing is enhanced in this population. Critically, similar effects were observed in non-video-game players who were trained on an action video game; this result verifies a causative relationship between video-game play and augmented spatial resolution.", "title": "" }, { "docid": "63d19f75bc0baee93404488a1d307a32", "text": "Mitochondria can unfold importing precursor proteins by unraveling them from their N-termini. However, how this unraveling is induced is not known. Two candidates for the unfolding activity are the electrical potential across the inner mitochondrial membrane and mitochondrial Hsp70 in the matrix. 
Here, we propose that many precursors are unfolded by the electrical potential acting directly on positively charged amino acid side chains in the targeting sequences. Only precursor proteins with targeting sequences that are long enough to reach the matrix at the initial interaction with the import machinery are unfolded by mitochondrial Hsp70, and this unfolding occurs even in the absence of a membrane potential.", "title": "" }, { "docid": "0226f16f4900bab76cf7ef71c9b55eb5", "text": "The ability to anticipate the future is essential when making real time critical decisions, provides valuable information to understand dynamic natural scenes, and can help unsupervised video representation learning. State-of-art video prediction is based on complex architectures that need to learn large numbers of parameters, are potentially hard to train, slow to run, and may produce blurry predictions. In this paper, we introduce DYAN, a novel network with very few parameters and easy to train, which produces accurate, high quality frame predictions, faster than previous approaches. DYAN owes its good qualities to its encoder and decoder, which are designed following concepts from systems identification theory and exploit the dynamics-based invariants of the data. Extensive experiments using several standard video datasets show that DYAN is superior generating frames and that it generalizes well across domains.", "title": "" } ]
scidocsrr
7fa1ed836532043953fa88fbc818754a
GoWvis: A Web Application for Graph-of-Words-based Text Visualization and Summarization
[ { "docid": "85c0da82ac01a9c74aba422089c013f3", "text": "The k-truss is a type of cohesive subgraphs proposed recently for the study of networks. While the problem of computing most cohesive subgraphs is NP-hard, there exists a polynomial time algorithm for computing k-truss. Compared with k-core which is also efficient to compute, k-truss represents the “core” of a k-core that keeps the key information of, while filtering out less important information from, the k-core. However, existing algorithms for computing k-truss are inefficient for handling today’s massive networks. We first improve the existing in-memory algorithm for computing k-truss in networks of moderate size. Then, we propose two I/O-efficient algorithms to handle massive networks that cannot fit in main memory. Our experiments on real datasets verify the efficiency of our algorithms and the value of k-truss.", "title": "" } ]
[ { "docid": "432ff163e4dded948aa5a27aa440cd30", "text": "Eighty-one female and sixty-seven male undergraduates at a Malaysian university, from seven faculties and a Center for Language Studies completed a Computer Self-Efficacy Scale, Computer Anxiety Scale, and an Attitudes toward the Internet Scale and give information about their use of the Internet. This survey research investigated undergraduates’ computer anxiety, computer self-efficacy, and reported use of and attitudes toward the Internet. This study also examined differences in computer anxiety, computer selfefficacy, attitudes toward the Internet and reported use of the Internet for undergraduates with different demographic variables. The findings suggest that the undergraduates had moderate computer anxiousness, medium attitudes toward the Internet, and high computer self-efficacy and used the Internet extensively for educational purposes such as doing research, downloading electronic resources and e-mail communications. This study challenges the long perceived male bias in the computer environment and supports recent studies that have identified greater gender equivalence in interest, use, and skills levels. However, there were differences in undergraduates’ Internet usage levels based on the discipline of study. Furthermore, higher levels of Internet usage did not necessarily translate into better computer self-efficacy among the undergraduates. A more important factor in determining computer self-efficacy could be the discipline of study and undergraduates studying computer related disciplines appeared to have higher self-efficacy towards computers and the Internet. Undergraduates who used the Internet more often may not necessarily feel more comfortable using them. Possibly, other factors such as the types of application used, the purpose for using, and individual satisfaction could also influence computer self-efficacy and computer anxiety. However, although Internet usage levels may not have any impact on computer self-efficacy, higher usage of the Internet does seem to decrease the levels of computer anxiety among the undergraduates. Undergraduates with lower computer anxiousness demonstrated more positive attitudes toward the Internet in this study.", "title": "" }, { "docid": "b85330c2d0816abe6f28fd300e5f9b75", "text": "This paper presents a novel dual polarized planar aperture antenna using the low-temperature cofired ceramics technology to realize a novel antenna-in-package for a 60-GHz CMOS differential transceiver chip. Planar aperture antenna technology ensures high gain and wide bandwidth. Differential feeding is adopted to be compatible with the chip. Dual polarization makes the antenna function as a pair of single polarized antennas but occupies much less area. The antenna is ±45° dual polarized, and each polarization acts as either a transmitting (TX) or receiving (RX) antenna. This improves the signal-to-noise ratio of the wireless channel in a point-to-point communication, because the TX/RX polarization of one antenna is naturally copolarized with the RX/TX polarization of the other antenna. A prototype of the proposed antenna is designed, fabricated, and measured, whose size is 12 mm × 12 mm × 1.128 mm (2.4λ0 × 2.4λ0 × 0.226λ0). The measurement shows that the -10 dB impedance bandwidth covers the entire 60 GHz unlicensed band (57-64 GHz) for both polarizations. 
Within the bandwidth, the isolation between the ports of the two polarizations is better than 26 dB, and the gain is higher than 10 dBi with a peak of around 12 dBi for both polarizations.", "title": "" }, { "docid": "2a5194f83142bbaef832011d08acd780", "text": "This paper proposes a novel data-driven approach for inertial navigation, which learns to estimate trajectories of natural human motions just from an inertial measurement unit (IMU) in every smartphone. The key observation is that human motions are repetitive and consist of a few major modes (e.g., standing, walking, or turning). Our algorithm regresses a velocity vector from the history of linear accelerations and angular velocities, then corrects low-frequency bias in the linear accelerations, which are integrated twice to estimate positions. We have acquired training data with ground truth motion trajectories across multiple human subjects and multiple phone placements (e.g., in a bag or a hand). The qualitative and quantitative evaluations have demonstrated that our simple algorithm outperforms existing heuristic-based approaches and is even comparable to full Visual Inertial navigation to our surprise. As far as we know, this paper is the first to introduce supervised training for inertial navigation, potentially opening up a new line of research in the domain of data-driven inertial navigation. We will publicly share our code and data to facilitate further research.", "title": "" }, { "docid": "3c41bdaeaaa40481c8e68ad00426214d", "text": "Image captioning is an important task, applicable to virtual assistants, editing tools, image indexing, and support of the disabled. In recent years significant progress has been made in image captioning, using Recurrent Neural Networks powered by long-short-term-memory (LSTM) units. Despite mitigating the vanishing gradient problem, and despite their compelling ability to memorize dependencies, LSTM units are complex and inherently sequential across time. To address this issue, recent work has shown benefits of convolutional networks for machine translation and conditional image generation [9, 34, 35]. Inspired by their success, in this paper, we develop a convolutional image captioning technique. We demonstrate its efficacy on the challenging MSCOCO dataset and demonstrate performance on par with the LSTM baseline [16], while having a faster training time per number of parameters. We also perform a detailed analysis, providing compelling reasons in favor of convolutional language generation approaches.", "title": "" }, { "docid": "06f562ff86d8a2834616726a1d4b6e15", "text": "This paper reports on interest operators, region detectors and region descriptors for photogrammetric applications. Features are the primary input for many applications like registration, 3D reconstruction, motion tracking, robot navigation, etc. Nowadays many detector and descriptor algorithms are available, providing corners, edges and regions of interest together with n-dimensional vectors useful in matching procedures. The main algorithms are here described and analyzed, together with their properties. Experiments concerning the repeatability, localization accuracy and quantitative analysis are performed and reported. Details on how to improve the location accuracy of region detectors are also reported.", "title": "" }, { "docid": "d42cba123245ef4e07351c4983b90225", "text": "Deduplication technologies are increasingly being deployed to reduce cost and increase space-efficiency in corporate data centers. 
However, prior research has not applied deduplication techniques inline to the request path for latency sensitive, primary workloads. This is primarily due to the extra latency these techniques introduce. Inherently, deduplicating data on disk causes fragmentation that increases seeks for subsequent sequential reads of the same data, thus, increasing latency. In addition, deduplicating data requires extra disk IOs to access on-disk deduplication metadata. In this paper, we propose an inline deduplication solution, iDedup, for primary workloads, while minimizing extra IOs and seeks. Our algorithm is based on two key insights from realworld workloads: i) spatial locality exists in duplicated primary data; and ii) temporal locality exists in the access patterns of duplicated data. Using the first insight, we selectively deduplicate only sequences of disk blocks. This reduces fragmentation and amortizes the seeks caused by deduplication. The second insight allows us to replace the expensive, on-disk, deduplication metadata with a smaller, in-memory cache. These techniques enable us to tradeoff capacity savings for performance, as demonstrated in our evaluation with real-world workloads. Our evaluation shows that iDedup achieves 60-70% of the maximum deduplication with less than a 5% CPU overhead and a 2-4% latency impact.", "title": "" }, { "docid": "c4d0084aab61645fc26e099115e1995c", "text": "Digital transformation often includes establishing big data analytics capabilities and poses considerable challenges for traditional manufacturing organizations, such as car companies. Successfully introducing big data analytics requires substantial organizational transformation and new organizational structures and business processes. Based on the three-stage evolution of big data analytics capabilities at AUDI, the full article provides recommendations for how traditional manufacturing organizations can successfully introduce big data analytics and master the related organizational transformations. Stage I: Advancing. In Stage I, AUDI’s sales and marketing department initiated data analytics projects. Commitment within the organization for data analytics grew slowly, and the strategic importance of the area was increasingly recognized. During this first stage, the IT department played a passive role, responding to the initiators of data analytics projects. The company’s digital innovation hub, however, laid the technology foundation for big data analytics during the Advancing stage. Stage II: Enabling. In Stage II, analytics competencies were built up not only in the digital innovation hub but also in the IT department. The IT department enabled big data analytics through isolated technology activities, sometimes taking on or insourcing tasks previously carried out by external consultancies or the digital innovation hub. Analytics services were developed through a more advanced technology infrastructure as well as analytics methods. Stage III: Leveraging. In the current Stage III, AUDI is leveraging the analytics competencies of the digital innovation hub and the IT department to centrally provide analytics-as-a-service. The IT department is now fully responsible for all technology tasks and is evolving to become a consulting partner for the other big data analytics stakeholders (sales and marketing department and digital innovation hub). 
In particular, digital services are enabled by leveraging the most valuable data source (i.e., operational car data).", "title": "" }, { "docid": "0afcf50fa7bbe82263b8c4dec7b44fd2", "text": "Motion sensing is of fundamental importance for user interfaces and input devices. In applications, where optical sensing is preferred, traditional camera-based approaches can be prohibitive due to limited resolution, low frame rates and the required computational power for image processing. We introduce a novel set of motion-sensing configurations based on laser speckle sensing that are particularly suitable for human-computer interaction. The underlying principles allow these configurations to be fast, precise, extremely compact and low cost. We provide an overview and design guidelines for laser speckle sensing for user interaction and introduce four general speckle projector/sensor configurations. We describe a set of prototypes and applications that demonstrate the versatility of our laser speckle sensing techniques.", "title": "" }, { "docid": "f8b0dcd771e7e7cf50a05cf7221f4535", "text": "Studies on monocyte and macrophage biology and differentiation have revealed the pleiotropic activities of these cells. Macrophages are tissue sentinels that maintain tissue integrity by eliminating/repairing damaged cells and matrices. In this M2-like mode, they can also promote tumor growth. Conversely, M1-like macrophages are key effector cells for the elimination of pathogens, virally infected, and cancer cells. Macrophage differentiation from monocytes occurs in the tissue in concomitance with the acquisition of a functional phenotype that depends on microenvironmental signals, thereby accounting for the many and apparently opposed macrophage functions. Many questions arise. When monocytes differentiate into macrophages in a tissue (concomitantly adopting a specific functional program, M1 or M2), do they all die during the inflammatory reaction, or do some of them survive? Do those that survive become quiescent tissue macrophages, able to react as naïve cells to a new challenge? Or, do monocyte-derived tissue macrophages conserve a \"memory\" of their past inflammatory activation? This review will address some of these important questions under the general framework of the role of monocytes and macrophages in the initiation, development, resolution, and chronicization of inflammation.", "title": "" }, { "docid": "2df1087f3125f6a2f8acd67649bcc87f", "text": "CubeSats are positioned to play a key role in Earth Science, wherein multiple copies of the same RADAR instrument are launched in desirable formations, allowing for the measurement of atmospheric processes over a short evolutionary timescale. To achieve this goal, such CubeSats require a high-gain antenna (HGA) that fits in a highly constrained volume. This paper presents a novel mesh deployable Ka-band antenna design that folds in a 1.5 U (10 × 10 × 15 cm3) stowage volume suitable for 6 U (10 × 20 × 30 cm3) class CubeSats. Considering all aspects of the deployable mesh reflector antenna including the feed, detailed simulations and measurements show that 42.6-dBi gain and 52% aperture efficiency is achievable at 35.75 GHz. The mechanical deployment mechanism and associated challenges are also described, as they are critical components of a deployable CubeSat antenna. 
Both solid and mesh prototype antennas have been developed and measurement results show excellent agreement with simulations.", "title": "" }, { "docid": "b3911204471f409cf243558f1a7c11db", "text": "Process mining allows for the automated discovery of process models from event logs. These models provide insights and enable various types of model-based analysis. This paper demonstrates that the discovered process models can be extended with information to predict the completion time of running instances. There are many scenarios where it is useful to have reliable time predictions. For example, when a customer phones her insurance company for information about her insurance claim, she can be given an estimate for the remaining processing time. In order to do this, we provide a configurable approach to construct a process model, augment this model with time information learned from earlier instances, and use this to predict e.g. the completion time. To provide meaningful time predictions we use a configurable set of abstractions that allow for a good balance between “overfitting” and “underfitting”. The approach has been implemented in ProM and through several experiments using real-life event logs we demonstrate its applicability.", "title": "" }, { "docid": "553a86035f5013595ef61c4c19997d7c", "text": "This paper proposes a novel self-oscillating, boost-derived (SOBD) dc-dc converter with load regulation. This proposed topology utilizes saturable cores (SCs) to offer self-oscillating and output regulation capabilities. Conventionally, the self-oscillating dc transformer (SODT) type of scheme can be implemented in a very cost-effective manner. The ideal dc transformer provides both input and output currents as pure, ripple-free dc quantities. However, the structure of an SODT-type converter will not provide regulation, and its oscillating frequency will change in accordance with the load. The proposed converter with SCs will allow output-voltage regulation to be accomplished by varying only the control current between the transformers, as occurs in a pulse-width modulation (PWM) converter. A control network that combines PWM schemes with a regenerative function is used for this converter. The optimum duty cycle is implemented to achieve low levels of input- and output-current ripples, which are characteristic of an ideal dc transformer. The oscillating frequency will spontaneously be kept near-constant, regardless of the load, without adding any auxiliary or compensation circuits. The typical voltage waveforms of the transistors are found to be close to quasisquare. The switching surges are well suppressed, and the voltage stress of the component is well clamped. The turn-on/turn-off of the switch is zero-voltage switching (ZVS), and its resonant transition can occur over a wide range of load current levels. A prototype circuit of an SOBD converter shows 86% efficiency at 48-V input, with 12-V, 100-W output, and presents an operating frequency of 100 kHz.", "title": "" }, { "docid": "c4ab0af91f664aa6d7674f986608ab06", "text": "Recent works showed that Generative Adversarial Networks (GANs) can be successfully applied in unsupervised domain adaptation, where, given a labeled source dataset and an unlabeled target dataset, the goal is to train powerful classifiers for the target samples. In particular, it was shown that a GAN objective function can be used to learn target features indistinguishable from the source ones. 
In this work, we extend this framework by (i) forcing the learned feature extractor to be domain-invariant, and (ii) training it through data augmentation in the feature space, namely performing feature augmentation. While data augmentation in the image space is a well established technique in deep learning, feature augmentation has not yet received the same level of attention. We accomplish it by means of a feature generator trained by playing the GAN minimax game against source features. Results show that both enforcing domain-invariance and performing feature augmentation lead to superior or comparable performance to state-of-the-art results in several unsupervised domain adaptation benchmarks.", "title": "" }, { "docid": "7177503e5a6dffcaab46009673af5eed", "text": "This paper describes a heart attack self-test application for a mobile phone that allows potential victims, without the intervention of a medical specialist, to quickly assess whether they are having a heart attack. Heart attacks can occur anytime and anywhere. Using pervasive technology such as a mobile phone and a small wearable ECG sensor it is possible to collect the user's symptoms and to detect the onset of a heart attack by analysing the ECG recordings. If the application assesses that the user is at risk, it will urge the user to call the emergency services immediately. If the user has a cardiac arrest the application will automatically determine the current location of the user and alert the ambulance services and others to the person's location.", "title": "" }, { "docid": "e99481b520c3dbd581bb7179dbe78bc1", "text": "A 6-bit 1.2 Gs/s non-calibrated flash ADC in a standard 45nm CMOS process, that achieves 0.45pJ/conv-step at full Nyquist bandwidth, is presented. Power efficient operation is achieved by a full optimization of amplifier blocks, and by innovations in the comparator and encoding stage. The performance of a non-calibrated flash ADC is directly related to device properties; a scaling analysis of our ADC in and across CMOS technologies gives insight into the excellent usability of 45nm technology for AD converter design.", "title": "" }, { "docid": "f15a7d48f3c42ccc97480204dc5c8622", "text": "We have developed a wearable upper limb support system (ULSS) for support during heavy overhead tasks. The purpose of this study is to develop the voluntary motion support algorithm for the ULSS, and to confirm the effectiveness of the ULSS with the developed algorithm through dynamic evaluation experiments. The algorithm estimates the motor intention of the wearer based on a bioelectrical signal (BES). The ULSS measures the BES via electrodes attached onto the triceps brachii, deltoid, and clavicle. The BES changes in synchronization with the motion of the wearer's upper limbs. The algorithm changes a control phase by comparing the BES and threshold values. The algorithm achieves voluntary motion support for dynamic tasks by changing support torques of the ULSS in synchronization with the control phase. Five healthy adult males moved heavy loads vertically overhead in the evaluation experiments. In a random instruction experiment, the volunteers moved in synchronization with random instructions, and we confirmed that the control phase changes in synchronization with the random instructions. In a motion support experiment, we confirmed that the average number of the vertical motion with the ULSS increased 2.3 times compared to the average number without the ULSS. 
As a result, the ULSS with the algorithm supports the motion voluntarily, and this has a positive effect on the support. In conclusion, we developed a novel voluntary motion support algorithm for the ULSS.", "title": "" }, { "docid": "cb1645b5b37e99a1dac8c6af1d6b1027", "text": "In recent years, the increasing propagation of hate speech on social media and the urgent need for effective countermeasures have drawn significant investment from governments, companies, and researchers. A large number of methods have been developed for automated hate speech detection online. This aims to classify textual content into non-hate or hate speech, in which case the method may also identify the targeting characteristics (i.e., types of hate, such as race and religion) in the hate speech. However, we notice a significant difference between the performance of the two (i.e., non-hate vs. hate). In this work, we argue for a focus on the latter problem for practical reasons. We show that it is a much more challenging task, as our analysis of the language in the typical datasets shows that hate speech lacks unique, discriminative features and is therefore found in the ‘long tail’ of a dataset, which makes it difficult to discover. We then propose Deep Neural Network structures serving as feature extractors that are particularly effective for capturing the semantics of hate speech. Our methods are evaluated on the largest collection of hate speech datasets based on Twitter, and are shown to be able to outperform the best performing method by up to 5 percentage points in macro-average F1, or 8 percentage points in the more challenging case of identifying hateful content.", "title": "" }, { "docid": "9ff1b40d45182124af3788cd9f55935a", "text": "With the increase in the size of the Web, search engines rely on Web crawlers to build and maintain the index of billions of pages for efficient searching. The creation and maintenance of Web indices is done by Web crawlers, which recursively traverse and download Web pages on behalf of search engines. The exponential growth of the Web poses many challenges for crawlers. This paper makes an attempt to classify all the existing crawlers on certain parameters and also identifies the various challenges to Web crawlers. Keywords— WWW, URL, Mobile Crawler, Mobile Agents, Web Crawler.", "title": "" }, { "docid": "60161ef0c46b4477f0cf35356bc3602c", "text": "Differential privacy is a formal mathematical framework for quantifying and managing privacy risks. It provides provable privacy protection against a wide range of potential attacks, including those currently unforeseen.
Differential privacy is primarily studied in the context of the collection, analysis, and release of aggregate statistics. These range from simple statistical estimations, such as averages, to machine learning. Tools for differentially private analysis are now in early stages of implementation and use across a variety of academic, industry, and government settings. Interest in the concept is growing among potential users of the tools, as well as within legal and policy communities, as it holds promise as a potential approach to satisfying legal requirements for privacy protection when handling personal information. In particular, differential privacy may be seen as a technical solution for analyzing and sharing data while protecting the privacy of individuals in accordance with existing legal or policy requirements for de-identification or disclosure limitation. This primer seeks to introduce the concept of differential privacy and its privacy implications to non-technical audiences. It provides a simplified and informal, but mathematically accurate, description of differential privacy. Using intuitive illustrations and limited mathematical formalism, it discusses the definition of differential privacy, how differential privacy addresses privacy risks, how differentially private analyses are constructed, and how such analyses can be used in practice. A series of illustrations is used to show how practitioners and policymakers can conceptualize the guarantees provided by differential privacy.
These illustrations are also used to explain related concepts, such as composition (the accumulation of risk across multiple analyses), privacy loss parameters, and privacy budgets. This primer aims to provide a foundation that can guide future decisions when analyzing and sharing statistical data about individuals, informing individuals about the privacy protection they will be afforded, and designing policies and regulations for robust privacy protection.", "title": "" } ]
scidocsrr
dcb7a4237963a2b1b152ba9c49ea8d99
Design of brushless permanent-magnet DC motors for racing motorcycles
[ { "docid": "89f02dab224baf8cd41bf3b5c03e3b2f", "text": "In this paper the design for the 2004 Prius electric drive machine is investigated in detail to assess its characteristics. It takes the form a fluid-cooled interior permanent magnet machine. This information is used to generate a complete operating profile. This information is then utilized to produce a design for an alternative induction motor. The specification is found to be demanding however a design is produced for the application which is found to be operationally close to that of the current permanent magnet design and should, in theory, be cheaper to manufacture due to the absence of permanent magnets.", "title": "" } ]
[ { "docid": "f93194ad79bc896692b402fb4ab7a3ef", "text": "A growing body of literature in social science has been devoted to extracting new information from social media to assist authorities in manage crowd projects. In this paper geolocation (or spatial) based information provided in social media is investigated to utilize intelligent transportation services. Further, the general trend of travel activities during weekdays is studied. For this purpose, a dataset consisting of more than 40,000 tweets in south and west part of the Sydney metropolitan area is utilized. After a data processing effort, the tweets are clustered into seven main categories using text mining techniques, where each category represents a type of activity including shopping, recreation, and work. Unlike the previous studies in this area, the focus of this work is on the content of the tweets rather than only using geotagged data or sentiment analysis. Beside activity type, temporal and spatial distributions of activities are used in the classification exercise. Categories are mapped to the identified regions within the city of Sydney across four time slots (two peak periods and two off-peak periods). Each time slot is used to construct a network with nodes representing people, activities and locations and edges reflecting the association between the nodes. The constructed networks are used to study the trend of activities/locations in a typical working day.", "title": "" }, { "docid": "26d235dbaa2bfd6bdf81cbd78610b68c", "text": "In the information systems (IS) domain, technology adoption has been one of the most extensively researched areas. Although in the last decade various models had been introduced to address the acceptance or rejection of information systems, there is still a lack of existing studies regarding a comprehensive review and classification of researches in this area. The main objective of this study is steered toward gaining a comprehensive understanding of the progresses made in the domain of IT adoption research, by highlighting the achievements, setbacks, and prospects recorded in this field so as to be able to identify existing research gaps and prospective areas for future research. This paper aims at providing a comprehensive review on the current state of IT adoption research. A total of 330 articles published in IS ranked journals between the years 2006 and 2015 in the domain of IT adoption were reviewed. The research scope was narrowed to six perspectives, namely year of publication, theories underlining the technology adoption, level of research, dependent variables, context of the technology adoption, and independent variables. In this research, information on trends in IT adoption is provided by examining related research works to provide insights and future direction on technology adoption for practitioners and researchers. This paper highlights future research paths that can be taken by researchers who wish to endeavor in technology adoption research. It also summarizes the key findings of previous research works including statistical findings of factors that had been introduced in IT adoption studies.", "title": "" }, { "docid": "75a01a7891b480aa480a57c1ab7d2c87", "text": "Increasing population has posed insurmountable challenges to agriculture in the provision of future food security, particularly in the Middle East and North Africa (MENA) region where biophysical conditions are not well-suited for agriculture. 
Iran, as a major agricultural country in the MENA region, has long been in the quest for food self-sufficiency, however, the capability of its land and water resources to realize this goal is largely unknown. Using very high-resolution spatial data sets, we evaluated the capacity of Iran’s land for sustainable crop production based on the soil properties, topography, and climate conditions. We classified Iran’s land suitability for cropping as (million ha): very good 0.4% (0.6), good 2.2% (3.6), medium 7.9% (12.8), poor 11.4% (18.5), very poor 6.3% (10.2), unsuitable 60.0% (97.4), and excluded areas 11.9% (19.3). In addition to overarching limitations caused by low precipitation, low soil organic carbon, steep slope, and high soil sodium content were the predominant soil and terrain factors limiting the agricultural land suitability in Iran. About 50% of the Iran’s existing croplands are located in low-quality lands, representing an unsustainable practice. There is little room for cropland expansion to increase production but redistribution of cropland to more suitable areas may improve sustainability and reduce pressure on water resources, land, and ecosystem in Iran.", "title": "" }, { "docid": "5ca14c0581484f5618dd806a6f994a03", "text": "Many of existing criteria for evaluating Web sites quality require methods such as heuristic evaluations, or/and empirical usability tests. This paper aims at defining a quality model and a set of characteristics relating internal and external quality factors and giving clues about potential problems, which can be measured by automated tools. The first step in the quality assessment process is an automatic check of the source code, followed by manual evaluation, possibly supported by an appropriate user panel. As many existing tools can check sites (mainly considering accessibility issues), the general architecture will be based upon a conceptual model of the site/page, and the tools will export their output to a Quality Data Base, which is the basis for subsequent actions (checking, reporting test results, etc.).", "title": "" }, { "docid": "4cef84bb3a1ff5f5ed64a4149d501f57", "text": "In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is the intelligence exhibited by machines or software. It is the subfield of computer science. Artificial Intelligence is becoming a popular field in computer science as it has enhanced the human life in many areas. Artificial intelligence in the last two decades has greatly improved performance of the manufacturing and service systems. Study in the area of artificial intelligence has given rise to the rapidly growing technology known as expert system. Application areas of Artificial Intelligence is having a huge impact on various fields of life as expert system is widely used these days to solve the complex problems in various areas as science, engineering, business, medicine, weather forecasting. The areas employing the technology of Artificial Intelligence have seen an increase in the quality and efficiency. This paper gives an overview of this technology and the application areas of this technology. 
This paper will also explore the current use of Artificial Intelligence technologies in the PSS design to damp the power system oscillations caused by interruptions, in Network Intrusion for protecting computer and communication networks from intruders, in the medical area to improve hospital inpatient care and for medical image classification, in accounting databases to mitigate their problems, and in computer games.", "title": "" }, { "docid": "a7e7d4232bd5c923746a1ecd7b5d4a27", "text": "OBJECTIVE\nThe goal of this project was to determine whether screening different groups of elderly individuals in a general or specialty practice would be beneficial in detecting dementia.\n\n\nBACKGROUND\nEpidemiologic studies of aging and dementia have demonstrated that the use of research criteria for the classification of dementia has yielded three groups of subjects: those who are demented, those who are not demented, and a third group of individuals who cannot be classified as normal or demented but who are cognitively (usually memory) impaired.\n\n\nMETHODS\nThe authors conducted computerized literature searches and generated a set of abstracts based on text and index words selected to reflect the key issues to be addressed. Articles were abstracted to determine whether there were sufficient data to recommend the screening of asymptomatic individuals. Other research studies were evaluated to determine whether there was value in identifying individuals who were memory-impaired beyond what one would expect for age but who were not demented. Finally, screening instruments and evaluation techniques for the identification of cognitive impairment were reviewed.\n\n\nRESULTS\nThere were insufficient data to make any recommendations regarding cognitive screening of asymptomatic individuals. Persons with memory impairment who were not demented were characterized in the literature as having mild cognitive impairment. These subjects were at increased risk for developing dementia or AD when compared with similarly aged individuals in the general population.\n\n\nRECOMMENDATIONS\nThere were sufficient data to recommend the evaluation and clinical monitoring of persons with mild cognitive impairment due to their increased risk for developing dementia (Guideline). Screening instruments, e.g., Mini-Mental State Examination, were found to be useful to the clinician for assessing the degree of cognitive impairment (Guideline), as were neuropsychologic batteries (Guideline), brief focused cognitive instruments (Option), and certain structured informant interviews (Option). Increasing attention is being paid to persons with mild cognitive impairment for whom treatment options are being evaluated that may alter the rate of progression to dementia.", "title": "" }, { "docid": "0ff8c4799b62c70ef6b7d70640f1a931", "text": "Using on-chip interconnection networks in place of ad-hoc global wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest; we estimate 6.6%. 
This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.", "title": "" }, { "docid": "c2fee2767395b1e9d6490956c7a23268", "text": "In this paper, we elaborate the advantages of combining two neural network methodologies, convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent neural networks, with the framework of hybrid hidden Markov models (HMM) for recognizing offline handwriting text. CNNs employ shift-invariant filters to generate discriminative features within neural networks. We show that CNNs are powerful tools to extract general purpose features that even work well for unknown classes. We evaluate our system on a Chinese handwritten text database and provide a GPU-based implementation that can be used to reproduce the experiments. All experiments were conducted with RWTH OCR, an open-source system developed at our institute.", "title": "" }, { "docid": "5f73215a12e3c8d850c8310426d34664", "text": "In this paper, we introduce a bilateral consistency metric on the surface camera (SCam) [26] for light field stereo matching to handle significant occlusions. The concept of SCam is used to model angular radiance distribution with respect to a 3D point. Our bilateral consistency metric is used to indicate the probability of occlusions by analyzing the SCams. We further show how to distinguish between on-surface and free space, textured and non-textured, and Lambertian and specular through bilateral SCam analysis. To speed up the matching process, we apply the edge preserving guided filter [14] on the consistency-disparity curves. Experimental results show that our technique outperforms both the state-of-the-art and the recent light field stereo matching methods, especially near occlusion boundaries.", "title": "" }, { "docid": "476d80eda71ba451c740c4cb36a0042f", "text": "This paper summarizes some of the literature on causal effects in mediation analysis. It presents causally-defined direct and indirect effects for continuous, binary, ordinal, nominal, and count variables. The expansion to non-continuous mediators and outcomes offers a broader array of causal mediation analyses than previously considered in structural equation modeling practice. A new result is the ability to handle mediation by a nominal variable. Examples with a binary outcome and a binary, ordinal or nominal mediator are given using Mplus to compute the effects. The causal effects require strong assumptions even in randomized designs, especially sequential ignorability, which is presumably often violated to some extent due to mediator-outcome confounding. To study the effects of violating this assumption, it is shown how a sensitivity analysis can be carried out. This can be used both in planning a new study and in evaluating the results of an existing study.", "title": "" }, { "docid": "bcdcefd92adfd738e1d1a7fcd156f3a9", "text": "Sophisticated commercial products and product assemblies can greatly benefit from novel IT-based approaches to the conditioning of these products and of „product knowledge‟, leading to what we call Smart Products. 
The paper motivates the need for such novel approaches, introduces important relevant challenges and research domains, and provides an early definition of Smart Products.", "title": "" }, { "docid": "68cb8836a07846d19118d21383f6361a", "text": "Background: Dental rehabilitation of partially or totally edentulous patients with oral implants has become a routine treatment modality in the last decades, with reliable long-term results. However, unfavorable local conditions of the alveolar ridge, due to atrophy, periodontal disease, and trauma sequelae, may provide insufficient bone volume or unfavorable vertical, horizontal, and sagittal intermaxillary relationships, which may render implant placement impossible or incorrect from a functional and esthetic viewpoint. The aim of the current review is to discuss the different strategies for reconstruction of the alveolar ridge defect for implant placement. Study design: The study design includes a literature review of the articles that address the association between Reconstruction of Mandibular Alveolar Ridge Defects and Implant Placement. Results: Yet, despite an increasing number of publications related to the correction of deficient alveolar ridges, much controversy still exists concerning which is the more suitable and reliable technique. This is often because the publications are of insufficient methodological quality (inadequate sample size, lack of well-defined exclusion and inclusion criteria, insufficient follow-up, lack of well-defined success criteria, etc.). Conclusion: On the basis of available data it is difficult to conclude that a particular surgical procedure offered a better outcome as compared to another. Hence the practical use of the available bone augmentation procedures for dental implants depends on the clinician’s preference in general and the clinical findings in the patient in particular. Surgical techniques that reduce trauma, preserve and augment the alveolar ridge represent key areas in the goal to optimize implant results.", "title": "" }, { "docid": "122fe53f1e745480837a23b68e62540a", "text": "The images degraded by fog suffer from poor contrast. In order to remove the fog effect, a Contrast Limited Adaptive Histogram Equalization (CLAHE)-based method is presented in this paper. This method establishes a maximum value to clip the histogram and redistributes the clipped pixels equally to each gray-level. It can limit the noise while enhancing the image contrast. In our method, firstly, the original image is converted from RGB to HSI. Secondly, the intensity component of the HSI image is processed by CLAHE. Finally, the HSI image is converted back to an RGB image. To evaluate the effectiveness of the proposed method, we experiment with a color image degraded by fog and apply edge detection to the image. The results show that our method is effective in comparison with traditional methods. Keywords: CLAHE, fog, degraded, remove, color image, HSI, edge detection.", "title": "" }, { "docid": "929e38444830abb7ce79d9657bbf6ae1", "text": "In this work, we implemented a bi-directional LSTM-RNN network to solve the reading comprehension problem. The problem is, given a question and a context (which contains the answer to the question), to find the answer in the context. Following the method in paper [11], we use bi-attention to make the link from question to context and from context to question, to make good use of the information about the relationship between the two parts.
By using the inner product, we find the probabilities of each context word being the first or last word of the answer. Also, we made some improvements to the paper's method, reducing the training time and improving the accuracy. After adjusting parameters, the best model achieves F1=48% and EM=33% on the leaderboard.", "title": "" }, { "docid": "8626803a7fd8a2190f4d6c4b56b04489", "text": "Quotes, or quotations, are well known phrases or sentences that we use for various purposes such as emphasis, elaboration, and humor. In this paper, we introduce a task of recommending quotes which are suitable for a given dialogue context and we present a deep learning recommender system which combines a recurrent neural network and a convolutional neural network in order to learn the semantic representation of each utterance and construct a sequence model for the dialog thread. We collected a large set of twitter dialogues with quote occurrences in order to evaluate the proposed recommender system. Experimental results show that our approach outperforms not only the other state-of-the-art algorithms in the quote recommendation task, but also other neural network based methods built for similar tasks.", "title": "" }, { "docid": "4eb18be4c47035477e8466fb346cc60a", "text": "Technology computer aided design (TCAD) simulations are conducted on a 4T PPD pixel, on a conventional gated photodiode (PD), and finally on a radiation hardened pixel. Simulations consist in demonstrating that it is possible to reduce the dark current due to interface states brought by the adjacent gate (AG), by means of a sharing mechanism between the PD and the drain. The sharing mechanism is activated and controlled by polarizing the AG at a positive OFF voltage, and consequently the dark current is reduced and not compensated. The drawback of the dark current reduction is a reduction of the full well capacity of the PD, which is not a problem when the pixel saturation is limited by the readout chain. Some measurements performed on pixel arrays confirm the TCAD results.", "title": "" }, { "docid": "1634b893909c900194f0f936d3dcdc10", "text": "The 2011 National Electrical Code® (NEC®) added Article 690.11 that requires photovoltaic (PV) systems on or penetrating a building to include a listed DC arc fault protection device. To fill this new market, manufacturers are developing new Arc Fault Circuit Interrupters (AFCIs). Comprehensive and challenging testing has been conducted using a wide range of PV technologies, system topologies, loads and noise sources. The Distributed Energy Technologies Laboratory (DETL) at Sandia National Laboratories (SNL) has used multiple reconfigurable arrays with a variety of module technologies, inverters, and balance of system (BOS) components to characterize new Photovoltaic (PV) DC AFCIs and Arc Fault Detectors (AFDs). The device's detection capabilities, characteristics and nuisance tripping avoidance were the primary purpose of the testing. SNL and Eaton Corporation collaborated to test an Eaton AFD prototype and quantify arc noise for a wide range of PV array configurations and the system responses. The tests were conducted by generating controlled, series PV arc faults between PV modules. Arc fault detection studies were performed on systems using aged modules, positive- and negative-grounded arrays, DC/DC converters, 3-phase inverters, and on strings with branch connectors. 
The tests were conducted to determine if nuisance trips would occur in systems using electrically noisy inverters, with series arc faults on parallel strings, and in systems with inverters performing anti-islanding and maximum power point tracking (MPPT) algorithms. The tests reported herein used the arc fault detection device to indicate when the trip signal was sent to the circuit interrupter. Results show significant noise is injected into the array from the inverter but AFCI functionality of the device was generally stable. The relative locations of the arc fault and detector had little influence on arc fault detection. Lastly, detection of certain frequency bands successfully differentiated normal operational noise from an arc fault signal.", "title": "" }, { "docid": "9ad1acc78312d66f3e37dfb39f4692df", "text": "This work targets human action recognition in video. While recent methods typically represent actions by statistics of local video features, here we argue for the importance of a representation derived from human pose. To this end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN) for action recognition. The descriptor aggregates motion and appearance information along tracks of human body parts. We investigate different schemes of temporal aggregation and experiment with P-CNN features obtained both for automatically estimated and manually annotated human poses. We evaluate our method on the recent and challenging JHMDB and MPII Cooking datasets. For both datasets our method shows consistent improvement over the state of the art.", "title": "" }, { "docid": "dd50ef22ed75db63254df4dc369d6891", "text": "—Speech Recognition by computer is a process where speech signals are automatically converted into the corresponding sequence of words in text. When the training and testing conditions are not similar, statistical speech recognition algorithms suffer from severe degradation in recognition accuracy. So we depend on intelligent and recognizable sounds for common communications. In this research, word inputs are recognized by the system and executed in the form of text corresponding to the input word. In this paper, we propose a hybrid model by using a fully connected hidden layer between the input state nodes and the output. We have proposed a new objective function for the neural network using a combined framework of statistical and neural network based classifiers. We have used the hybrid model of Radial Basis Function and the Pattern Matching method. The system was trained by Indian English word consisting of 50 words uttered by 20 male speakers and 20 female speakers. The test samples comprised 30 words spoken by a different set of 20 male speakers and 20 female speakers. The recognition accuracy is found to be 91% which is well above the previous results.", "title": "" }, { "docid": "9bdd2cf41bf5b967ef443855b1b49e0e", "text": "We propose a Label Propagation based algorithm for weakly supervised text classification. We construct a graph where each document is represented by a node and edge weights represent similarities among the documents. Additionally, we discover underlying topics using Latent Dirichlet Allocation (LDA) and enrich the document graph by including the topics in the form of additional nodes. The edge weights between a topic and a text document represent level of “affinity” between them. Our approach does not require document level labelling, instead it expects manual labels only for topic nodes. 
This significantly minimizes the level of supervision needed as only a few topics are observed to be enough for achieving sufficiently high accuracy. The Label Propagation Algorithm is employed on this enriched graph to propagate labels among the nodes. Our approach combines the advantages of Label Propagation (through document-document similarities) and Topic Modelling (for minimal but smart supervision). We demonstrate the effectiveness of our approach on various datasets and compare with state-of-the-art weakly supervised text classification approaches.", "title": "" } ]
scidocsrr
d21a5ae7a7bd89a24b0cfb87374690af
Clinical Alarms in Intensive Care Units: Perceived Obstacles of Alarm Management and Alarm Fatigue in Nurses.
[ { "docid": "2779c7ffd399b12c762ca40dd8589700", "text": "BACKGROUND\nReliance on physiological monitors to continuously \"watch\" patients and to alert the nurse when a serious rhythm problem occurs is standard practice on monitored units. Alarms are intended to alert clinicians to deviations from a predetermined \"normal\" status. However, alarm fatigue may occur when the sheer number of monitor alarms overwhelms clinicians, possibly leading to alarms being disabled, silenced, or ignored.\n\n\nPURPOSE\nExcessive numbers of monitor alarms and fear that nurses have become desensitized to these alarms was the impetus for a unit-based quality improvement project.\n\n\nMETHODS\nSmall tests of change to improve alarm management were conducted on a medical progressive care unit. The types and frequency of monitor alarms in the unit were assessed. Nurses were trained to individualize patients' alarm parameter limits and levels. Monitor software was modified to promote audibility of critical alarms.\n\n\nRESULTS\nCritical monitor alarms were reduced 43% from baseline data. The reduction of alarms could be attributed to adjustment of monitor alarm defaults, careful assessment and customization of monitor alarm parameter limits and levels, and implementation of an interdisciplinary monitor policy.\n\n\nDISCUSSION\nAlthough alarms are important and sometimes life-saving, they can compromise patients' safety if ignored. This unit-based quality improvement initiative was beneficial as a starting point for revamping alarm management throughout the institution.", "title": "" } ]
[ { "docid": "bbfa632dc8e262fd30addc3ac97f1501", "text": "Chemical Organization Theory (COT) is a recently developed formalism inspired by chemical reactions. Because of its simplicity, generality and power, COT seems able to tackle a wide variety of problems in the analysis of complex, self-organizing systems across multiple disciplines. The elements of the formalism are resources and reactions, where a reaction (which has the form a + b + ... → c + d +...) maps a combination of resources onto a new combination. The resources on the input side are “consumed” by the reaction, which “produces” the resources on the output side. Thus, a reaction represents an elementary process that transforms resources into new resources. Reaction networks tend to self-organize into invariant subnetworks, called “organizations”, which are attractors of their dynamics. These are characterized by closure (no new resources are added) and self-maintenance (no existing resources are lost). Thus, they provide a simple model of autopoiesis: the organization persistently recreates its own components. Organizations can be more or less resilient in the face of perturbations, depending on properties such as the size of their basin of attraction or the redundancy of their reaction pathways. Concrete applications of organizations can be found in autocatalytic cycles, metabolic or genetic regulatory networks, ecosystems, sustainable development, and social systems.", "title": "" }, { "docid": "6cc3f51b56261c1b51da88fb9deaa893", "text": "We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed Latex characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).", "title": "" }, { "docid": "6bd7ec31948858820d6ef9bcd700cf5a", "text": "Methylphenidate (MPH) is a first line option in the psychopharmacologic treatment of adults with Attention-Deficit/Hyperactivity Disorder (ADHD). 
However, there is a considerable proportion of adult patients who do not respond to treatment with MPH or discontinue drug therapy. Since effects of genetic variants in the response to MPH treatment might explain these negative outcomes, we conducted an electronic systematic search of MEDLINE-indexed literature looking for articles containing information about pharmacogenetics of ADHD in adults published until January, 2012. The keywords used were 'ADHD', 'Attention-Deficit/Hyperactivity Disorder' and 'gene' in combination with methylphenidate, amphetamine or atomoxetine. Only 5 pharmacogenetic studies on adult ADHD met inclusion criteria. The results evidenced that most findings obtained so far are negative, and all studies focused on MPH response. There is only one positive result, for a polymorphism at the dopamine transporter gene (DAT1) gene. The current state of the art in adult ADHD implies that pharmacogenetic tests are far from routine clinical practice. However, the integration of these studies with neuroimaging and neuropsychological tests may help to understand mechanisms of drug action and the pathophysiology of ADHD.", "title": "" }, { "docid": "eac09ce6dbdc5da8046a7b960b1259f0", "text": "OONP: Overview An OONP parser consists of a Reader equipped with read/write heads, Inline Memory that represents the document, and Carry-on Memory that summarizes the current understanding of the document at each time step. For each document to parse, OONP first preprocesses it and puts it into the Inline Memory, and then Reader controls the read-heads to sequentially go through the Inline Memory and at the same time update the Carry-on Memory. Object-oriented Neural Programming (OONP) for Document Understanding Zhengdong Lu 1, Xianggen Liu 1,2, Haotian Cui 1,2, Yukun Yan 1,2, Daqi Zheng 1 1DeeplyCurious.ai, 2Tsinghua University", "title": "" }, { "docid": "ce5c0f59953e8672da5e413230c4d8d2", "text": "Multivariate volumetric datasets are often encountered in results generated by scientific simulations. Compared to univariate datasets, analysis and visualization of multivariate datasets are much more challenging due to the complex relationships among the variables. As an effective way to visualize and analyze multivariate datasets, volume rendering has been frequently used, although designing good multivariate transfer functions is still non-trivial. In this paper, we present an interactive workflow to allow users to design multivariate transfer functions. To handle large scale datasets, in the preprocessing stage we reduce the number of data points through data binning and aggregation, and then a new set of data points with a much smaller size are generated. The relationship between all pairs of variables is presented in a matrix juxtaposition view, where users can navigate through the different subspaces. An entropy based method is used to help users to choose which subspace to explore. We proposed two weights: scatter weight and size weight that are associated with each projected point in those different subspaces. Based on those two weights, data point filter and kernel density estimation operations are employed to assist users to discover interesting features. For each user-selected feature, a Gaussian function is constructed and updated incrementally. Finally, all those selected features are visualized through multivariate volume rendering to reveal the structure of data. 
With our system, users can interactively explore different subspaces and specify multivariate transfer functions in an effective way. We demonstrate the effectiveness of our system with several multivariate volumetric datasets.", "title": "" }, { "docid": "1bc4aabbc8aed4f3034358912d9728d5", "text": "Cognitive Radio presents a new opportunity area to explore for better utilization of a scarce natural resource like spectrum, which is under focus due to the increased presence of new communication devices, density of users and development of new data-intensive applications. Cognitive Radio relies on dynamic utilization of spectrum and is positioned as a promising solution to the spectrum underutilization problem. However, the reliability of a CR system in a noisy environment remains a challenge area. Especially manmade impulsive noise makes spectrum sensing difficult. In this paper we have presented a simulation model to analyze the effect of impulsive noise in a Cognitive Radio system. Primary user detection in the presence of impulsive noise is investigated for different noise thresholds and other signal parameters of interest using the unconventional power spectral density based detection approach. Also, possible alternatives for accurate primary user detection which are of interest for future research in this area are discussed for practical implementation.", "title": "" }, { "docid": "39daa09f2e57903abe1109335127d4b9", "text": "Semantic search promises to provide more accurate results than present-day keyword search. However, progress with semantic search has been delayed due to the complexity of its query languages. In this paper, we explore a novel approach of adapting keywords to querying the semantic web: the approach automatically translates keyword queries into formal logic queries so that end users can use familiar keywords to perform semantic search. A prototype system named ‘SPARK’ has been implemented in light of this approach. Given a keyword query, SPARK outputs a ranked list of SPARQL queries as the translation result. The translation in SPARK consists of three major steps: term mapping, query graph construction and query ranking. Specifically, a probabilistic query ranking model is proposed to select the most likely SPARQL query. In the experiment, SPARK achieved an encouraging translation result.", "title": "" }, { "docid": "88e59d7830d63fe49b1a4d49726b01db", "text": "Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs), commonly represented as tree structures. Annotating NL utterances with their corresponding MRs is expensive and time-consuming, and thus the limited availability of labeled data often becomes the bottleneck of data-driven, supervised models. We introduce STRUCTVAE, a variational auto-encoding model for semi-supervised semantic parsing, which learns both from limited amounts of parallel data and readily-available unlabeled NL utterances. STRUCTVAE models latent MRs not observed in the unlabeled data as tree-structured latent variables. 
Experiments on semantic parsing on the ATIS domain and Python code generation show that with extra unlabeled data, STRUCTVAE outperforms strong supervised models.1", "title": "" }, { "docid": "82180726cc1aaaada69f3b6cb0e89acc", "text": "The wheelchair is the major means of transport for physically disabled people. However, it cannot overcome architectural barriers such as curbs and stairs. In this paper, the authors proposed a method to avoid falling down of a wheeled inverted pendulum type robotic wheelchair for climbing stairs. The problem of this system is that the feedback gain of the wheels cannot be set high due to modeling errors and gear backlash, which results in the movement of wheels. Therefore, the wheels slide down the stairs or collide with the side of the stairs, and finally the wheelchair falls down. To avoid falling down, the authors proposed a slider control strategy based on skyhook model in order to decrease the movement of wheels, and a rotary link control strategy based on the staircase dimensions in order to avoid collision or slide down. The effectiveness of the proposed fall avoidance control strategy was validated by ODE simulations and the prototype wheelchair. Keywords—EPW, fall avoidance control, skyhook, wheeled inverted pendulum.", "title": "" }, { "docid": "c250963a2b536a9ce9149f385f4d2a0f", "text": "The systematic review (SR) is a methodology used to find and aggregate all relevant existing evidence about a specific research question of interest. One of the activities associated with the SR process is the selection of primary studies, which is a time consuming manual task. The quality of primary study selection impacts the overall quality of SR. The goal of this paper is to propose a strategy named “Score Citation Automatic Selection” (SCAS), to automate part of the primary study selection activity. The SCAS strategy combines two different features, content and citation relationships between the studies, to make the selection activity as automated as possible. Aiming to evaluate the feasibility of our strategy, we conducted an exploratory case study to compare the accuracy of selecting primary studies manually and using the SCAS strategy. The case study shows that for three SRs published in the literature and previously conducted in a manual implementation, the average effort reduction was 58.2 % when applying the SCAS strategy to automate part of the initial selection of primary studies, and the percentage error was 12.98 %. Our case study provided confidence in our strategy, and suggested that it can reduce the effort required to select the primary studies without adversely affecting the overall results of SR.", "title": "" }, { "docid": "8d60e22baf2a7f8090837c74d1873504", "text": "Myxobolus filamentum sp. n. was found infecting gill filaments of three of 39 Brycon orthotaenia Günther specimens examined (8%), which were taken from the river São Francisco in Minas Gerais state, Brazil. Plasmodia of the parasite were white and long, measuring 5 mm in lenght. Mature spores of M. filamentum sp. n. were oval from the frontal view and biconvex from the lateral view, measuring 7.5-9.7 µm (9.0 ± 0.3 µm) in length and 5.2-7.3 µm (6.2 ± 0.4 µm) in width. The polar capsules were elongated and equal in size, measuring 3.8-5.5 µm (4.7 ± 0.3 µm) in length and 1.3-2.2 µm (1.7 ± 0.1 µm) in width. The development of the parasite led to compression of the adjacent tissues and inflammatory infiltrate with granulocytic cells. 
Ultrastructural observation revealed that the plasmodia were delimited by two membranes, which had numerous and extensive pinocytotic channels extending into the wide ectoplasm zone. The plasmodial wall exhibited abundant villi-like projections and a thin layer of granular material prevented direct contact between the plasmodial wall and the host tissue. Phylogenetic analysis, based on 18S rDNA, showed M. filamentum sp. n. as a sister species of Myxobolus oliveirai Milanin, Eiras, Arana, Maia, Alves, Silva, Carriero, Ceccarelli et Adriano, 2010, a parasite of other fish species of the genus Brycon Müller et Troschel from South America.", "title": "" }, { "docid": "caeb8b671a1e0e467200d1b4ef69fde4", "text": "Adult degenerative lumbar scoliosis is a 3-dimensional deformity defined as a coronal deviation of greater than 10°. It causes significant pain and disability in the elderly. With the aging of the population, the incidence of adult degenerative lumbar scoliosis will continue to increase. During the past decade, advancements in surgical techniques and instrumentation have changed the management of adult spinal deformity and led to improved long-term outcomes. In this article, the authors provide a comprehensive review of the pathophysiology, diagnosis, and management of adult degenerative lumbar scoliosis. [Orthopedics. 2017; 40(6):e930-e939.].", "title": "" }, { "docid": "aeadbf476331a67bec51d5d6fb6cc80b", "text": "Gamification, an emerging idea for using game-design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, little research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users, that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and at the same time, advance", "title": "" }, { "docid": "c6d2e6f1ca41d65fc516918a26b9e52a", "text": "A novel design of a multiband monopole mobile phone antenna with circular polarization for GNSS application is presented. The proposed antenna generates four resonant frequencies with branch lines and a shorted parasitic strip to obtain a wide operating band. With the definition of 2.5:1 VSWR, the bandwidth covers several wireless communication systems, including GSM (880 ~ 960 MHz), DCS (1710 ~ 1880 MHz), PCS (1850 ~ 1990 MHz), UMTS (1920 ~ 2170 MHz), WiBro (2300 ~ 2390 MHz) and ISM (2400 ~ 2483 MHz), and also covers GNSS, including COMPASS (1559.052 ~ 1591.788 MHz), GPS (1575.42 ± 5 MHz), GLONASS (1602 ~ 1615.5 MHz). 
A tuning stub is added to the ground plane and the feeding strip is mounted 45 ° at the corner to achieve circular polarization for GNSS application. The 3 dB axial ratio (AR) bandwidth (AR-BW) is obtained from 1540 to 1630 MHz, covering the L1 band of GNSS, including COMPASS, GPS and GLONASS. In the 3 dB axial ratio bandwidth, right hand and left hand circularly polarizations are obtained in different broadside directions, with the peak circularly polarized gain of more than 2.7 dBic. An equivalent circuit network is used to analyze the mechanism of circular polarization. Details of the proposed antenna parameters, including return loss, radiation characteristics, and AR spectrum are presented and discussed.", "title": "" }, { "docid": "4229df2bd2078c6be4464879caccc611", "text": "Galois Field arithmetic forms the basis of Reed-Solomon and other erasure coding techniques to protect storage systems from failures. Most implementations of Galois Field arithmetic rely on multiplication tables or discrete logarithms to perform this operation. However, the advent of 128-bit instructions, such as Intel’s Streaming SIMD Extensions, allows us to perform Galois Field arithmetic much faster. This short paper details how to leverage these instructions for various field sizes, and demonstrates the significant performance improvements on commodity microprocessors. The techniques that we describe are available as open source software.", "title": "" }, { "docid": "5f120ae2429d7b3c8085f96a63eae817", "text": "Background: Antenatal mothers with anemia are high risk to varieties of health implications as well as to their off springs. Many studies show a high mortality and morbidity related to anemia in pregnancy. Methods: This cross-sectional study was designed to determine factors associated with anemia amongst forty seven antenatal mothers attending Antenatal Clinic at Klinik Kesihatan Kuala Besut, Terengganu in November 2009. Systematic random sampling was applied and information gathered based on patients’ medical records and through face-to-face interviewed by using a structured questionnaire. Results: The mean age of respondents was 28.3 year-old. More than half of mothers were multigravidas. Of 47 respondents, 57.4% (95% CI: 43.0, 72.0) was anemic. The proportion of anemia was high for grand multigravidas mother (66.7%), those at third trimester of pregnancy (70.4%), did antenatal booking at first trimester (65.4%), poor haematinic compliance (76.5%), not taking any medication (60.5%), those with no co-morbid illnesses (60.0%), mothers with high education level (71.4%) and those with satisfactory monthly income (61.5%). The proportion of anemia was 58.3% and 57.1% for mothers with last child birth spacing of two years or less and more than two years accordingly. There was a significant association of haematinic compliance with the anemia (OR: 4.571; 95% CI: 1.068, 19.573). Conclusions: Antenatal mothers in this area have a substantial proportion of anemia despite of freely and routinely prescription of haematinic at primary health care centers. Poor haematinic compliance was a significant risk factor. Health education programs regarding haematinic compliance and adequate intake of iron rich diet during pregnancy need to be strengthened to curb this problem. 
", "title": "" }, { "docid": "78cfd752153b96de918d6ebf4d6654cd", "text": "Machine learning is an integral technology many people utilize in all areas of human life. It is pervasive in modern living worldwide, and has multiple usages. One application is image classification, embraced across many spheres of influence such as business, finance, medicine, etc. to enhance produces, causes, efficiency, etc. This need for more accurate, detail-oriented classification increases the need for modifications, adaptations, and innovations to Deep Learning Algorithms. This article used Convolutional Neural Networks (CNN) to classify scenes in the CIFAR-10 database, and detect emotions in the KDEF database. The proposed method converted the data to the wavelet domain to attain greater accuracy and comparable efficiency to the spatial domain processing. By dividing image data into subbands, important feature learning occurred over differing low to high frequencies. The combination of the learned low and high frequency features, and processing the fused feature mapping resulted in an advance in the detection accuracy. Comparing the proposed methods to spatial domain CNN and Stacked Denoising Autoencoder (SDA), experimental findings revealed a substantial increase in accuracy.", "title": "" }, { "docid": "796dc233bbf4e9e063485f26ab7b5b64", "text": "Anomaly detection refers to identifying the patterns in data that deviate from expected behavior. These non-conforming patterns are often termed as outliers, malwares, anomalies or exceptions in different application domains. This paper presents a novel, generic real-time distributed anomaly detection framework for multi-source stream data. As a case study, we have decided to detect anomaly for multi-source VMware-based cloud data center. The framework monitors VMware performance stream data (e.g., CPU load, memory usage, etc.) continuously. It collects these data simultaneously from all the VMwares connected to the network. It notifies the resource manager to reschedule its resources dynamically when it identifies any abnormal behavior of its collected data. We have used Apache Spark, a distributed framework for processing performance stream data and making prediction without any delay. Spark is chosen over a traditional distributed framework (e.g., Hadoop and MapReduce, Mahout, etc.) that is not ideal for stream data processing. We have implemented a flat incremental clustering algorithm to model the benign characteristics in our distributed Spark based framework. We have compared the average processing latency of a tuple during clustering and prediction in Spark with Storm, another distributed framework for stream data processing. 
We experimentally find that Spark processes a tuple much quicker than Storm on average.", "title": "" }, { "docid": "7568cb435d0211248e431d865b6a477e", "text": "We propose prosody embeddings for emotional and expressive speech synthesis networks. The proposed methods introduce temporal structures in the embedding networks, thus enabling fine-grained control of the speaking style of the synthesized speech. The temporal structures can be designed either on the speech side or the text side, leading to different control resolutions in time. The prosody embedding networks are plugged into end-to-end speech synthesis networks and trained without any other supervision except for the target speech for synthesizing. It is demonstrated that the prosody embedding networks learned to extract prosodic features. By adjusting the learned prosody features, we could change the pitch and amplitude of the synthesized speech both at the frame level and the phoneme level. We also introduce the temporal normalization of prosody embeddings, which shows better robustness against speaker perturbations during prosody transfer tasks.", "title": "" } ]
scidocsrr
99d036798fbfe4d1b87b7d6aa11d8577
Why Aren't Operating Systems Getting Faster As Fast as Hardware?
[ { "docid": "ef241b52d4f4fdc892071f684b387242", "text": "A description is given of Sprite, an experimental network operating system under development at the University of California at Berkeley. It is part of a larger research project, SPUR, for the design and construction of a high-performance multiprocessor workstation with special hardware support of Lisp applications. Sprite implements a set of kernel calls that provide sharing, flexibility, and high performance to networked workstations. The discussion covers: the application interface: the basic kernel structure; management of the file name space and file data, virtual memory; and process migration.<<ETX>>", "title": "" } ]
[ { "docid": "ca906d18fca3f4ee83224b7728cbd379", "text": "AIM\nTo investigate the effect of some psychosocial variables on nurses' job satisfaction.\n\n\nBACKGROUND\nNurses' job satisfaction is one of the most important factors in determining individuals' intention to stay or leave a health-care organisation. Literature shows a predictive role of work climate, professional commitment and work values on job satisfaction, but their conjoint effect has rarely been considered.\n\n\nMETHODS\nA cross-sectional questionnaire survey was adopted. Participants were hospital nurses and data were collected in 2011.\n\n\nRESULTS\nProfessional commitment and work climate positively predicted nurses' job satisfaction. The effect of intrinsic vs. extrinsic work value orientation on job satisfaction was completely mediated by professional commitment.\n\n\nCONCLUSIONS\nNurses' job satisfaction is influenced by both contextual and personal variables, in particular work climate and professional commitment. According to a more recent theoretical framework, work climate, work values and professional commitment interact with each other in determining nurses' job satisfaction.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nNursing management must be careful to keep the context of work tuned to individuals' attitude and vice versa. Improving the work climate can have a positive effect on job satisfaction, but its effect may be enhanced by favouring strong professional commitment and by promoting intrinsic more than extrinsic work values.", "title": "" }, { "docid": "98fee7d44d311692677f3ace1e79b045", "text": "Generative adversarial networks (GANs) has achieved great success in the field of image processing, Adversarial Neural Machine Translation(NMT) is the application of GANs to machine translation. Unlike previous work training NMT model through maximizing the likelihood of the human translation, Adversarial NMT minimizes the distinction between human translation and the translation generated by a NMT model. Even though Adversarial NMT has achieved impressive results, while using little in the way of prior knowledge. In this paper, we integrated bilingual dictionaries to Adversarial NMT by leveraging a character model. Extensive experiment shows that our proposed methods can achieve remarkable improvement on the translation quality of Adversarial NMT, and obtain better result than several strong baselines.", "title": "" }, { "docid": "4163070f45dd4d252a21506b1abcfff4", "text": "Nowadays, security solutions are mainly focused on providing security defences, instead of solving one of the main reasons for security problems that refers to an appropriate Information Systems (IS) design. In fact, requirements engineering often neglects enough attention to security concerns. In this paper it will be presented a case study of our proposal, called SREP (Security Requirements Engineering Process), which is a standard-centred process and a reuse-based approach which deals with the security requirements at the earlier stages of software development in a systematic and intuitive way by providing a security resources repository and by integrating the Common Criteria into the software development lifecycle. 
In brief, a case study is shown in this paper demonstrating how the security requirements for a security critical IS can be obtained in a guided and systematic way by applying SREP.", "title": "" }, { "docid": "da8cdee004db530e262a13e21daf4970", "text": "Arcing between the plasma and the wafer, kit, or target in PVD processes can cause significant wafer damage and foreign material contamination which limits wafer yield. Monitoring the plasma and quickly detecting this arcing phenomena is critical to ensuring that today's PVD processes run optimally and maximize product yield. This is particularly true in 300mm semiconductor manufacturing, where energies used are higher and more product is exposed to the plasma with each wafer run than in similar 200mm semiconductor manufacturing processes.", "title": "" }, { "docid": "ae4974a3d7efedab7cd6651101987e79", "text": "Fisher Kernels and Deep Learning were two developments with significant impact on large-scale object categorization in the last years. Both approaches were shown to achieve state-of-the-art results on large-scale object categorization datasets, such as ImageNet. Conceptually, however, they are perceived as very different and it is not uncommon for heated debates to spring up when advocates of both paradigms meet at conferences or workshops. In this work, we emphasize the similarities between both architectures rather than their differences and we argue that such a unified view allows us to transfer ideas from one domain to the other. As a concrete example we introduce a method for learning a support vector machine classifier with Fisher kernel at the same time as a task-specific data representation. We reinterpret the setting as a multi-layer feed forward network. Its final layer is the classifier, parameterized by a weight vector, and the two previous layers compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture model. We introduce a gradient descent based learning algorithm that, in contrast to other feature learning techniques, is not just derived from intuition or biological analogy, but has a theoretical justification in the framework of statistical learning theory. Our experiments show that the new training procedure leads to significant improvements in classification accuracy while preserving the modularity and geometric interpretability of a support vector machine setup.", "title": "" }, { "docid": "ee15c7152a2e2b9f372ca97283a3c114", "text": "Essential oil (EO) of the leaves of Eugenia uniflora L. (Brazilian cherry tree) was evaluated for its antioxidant, antibacterial and antifungal properties. The acute toxicity of the EO administered by oral route was also evaluated in mice. The EO exhibited antioxidant activity in the DPPH, ABTS and FRAP assays and reduced lipid peroxidation in the kidney of mice. The EO also showed antimicrobial activity against two important pathogenic bacteria, Staphylococcus aureus and Listeria monocytogenes, and against two fungi of the Candida species, C. lipolytica and C. guilliermondii. Acute administration of the EO by the oral route did not cause lethality or toxicological effects in mice. These findings suggest that the EO of the leaves of E. 
uniflora may have the potential for use in the pharmaceutical industry.", "title": "" }, { "docid": "45447ab4e0a8bd84fcf683ac482f5497", "text": "Most of the current learning analytic techniques have as starting point the data recorded by Learning Management Systems (LMS) about the interactions of the students with the platform and among themselves. But there is a tendency on students to rely less on the functionality offered by the LMS and use more applications that are freely available on the net. This situation is magnified in studies in which students need to interact with a set of tools that are easily installed on their personal computers. This paper shows an approach using Virtual Machines by which a set of events occurring outside of the LMS are recorded and sent to a central server in a scalable and unobtrusive manner.", "title": "" }, { "docid": "b87be040dae4d38538159876e01f310b", "text": "We present data from detailed observations of CityWall, a large multi-touch display installed in a central location in Helsinki, Finland. During eight days of installation, 1199 persons interacted with the system in various social configurations. Videos of these encounters were examined qualitatively as well as quantitatively based on human coding of events. The data convey phenomena that arise uniquely in public use: crowding, massively parallel interaction, teamwork, games, negotiations of transitions and handovers, conflict management, gestures and overt remarks to co-present people, and \"marking\" the display for others. We analyze how public availability is achieved through social learning and negotiation, why interaction becomes performative and, finally, how the display restructures the public space. The multi-touch feature, gesture-based interaction, and the physical display size contributed differentially to these uses. Our findings on the social organization of the use of public displays can be useful for designing such systems for urban environments.", "title": "" }, { "docid": "213ff71ab1c6ac7915f6fb365100c1f5", "text": "Action anticipation and forecasting in videos do not require a hat-trick, as far as there are signs in the context to foresee how actions are going to be deployed. Capturing these signs is hard because the context includes the past. We propose an end-to-end network for action anticipation and forecasting with memory, to both anticipate the current action and foresee the next one. Experiments on action sequence datasets show excellent results indicating that training on histories with a dynamic memory can significantly improve forecasting performance.", "title": "" }, { "docid": "8dc366f9bdcb8ade26c1dc5557c9e3e0", "text": "While the idea that querying mechanisms for complex relationships (otherwise known as Semantic Associations) should be integral to Semantic Web search technologies has recently gained some ground, the issue of how search results will be ranked remains largely unaddressed. Since it is expected that the number of relationships between entities in a knowledge base will be much larger than the number of entities themselves, the likelihood that Semantic Association searches would result in an overwhelming number of results for users is increased, therefore elevating the need for appropriate ranking schemes. Furthermore, it is unlikely that ranking schemes for ranking entities (documents, resources, etc.) 
may be applied to complex structures such as Semantic Associations. In this paper, we present an approach that ranks results based on how predictable a result might be for users. It is based on a relevance model SemRank, which is a rich blend of semantic and information-theoretic techniques with heuristics that supports the novel idea of modulative searches, where users may vary their search modes to effect changes in the ordering of results depending on their need. We also present the infrastructure used in the SSARK system to support the computation of SemRank values for resulting Semantic Associations and their ordering.", "title": "" }, { "docid": "9df78ef5769ed4da768d1a7b359794ab", "text": "We describe a computer-aided optimization technique for the efficient and reliable design of compact wide-band waveguide septum polarizers (WSP). Wide-band performance is obtained by a global optimization which considers not only the septum section but also several step discontinuities placed before the ridge-to-rectangular bifurcation and the square-to-circular discontinuity. The proposed technique makes use of a dynamical optimization procedure which has been tested by designing several WSP operating in different frequency bands. In this work two examples are reported, one operating at Ku band and a very wideband prototype (3.4-4.2 GHz) operating in the C band. The component design, entirely carried out at computer level, has demonstrated significant advantages in terms of development times and no need of post manufacturing adjustments. The very satisfactory agreement between experimental and theoretical results further confirms the validity of the proposed technique.", "title": "" }, { "docid": "81bfa507b8cd849f30c410ba96b0034e", "text": "Augmented reality (AR) makes it possible to create games in which virtual objects are overlaid on the real world, and real objects are tracked and used to control virtual ones. We describe the development of an AR racing game created by modifying an existing racing game, using an AR infrastructure that we developed for use with the XNA game development platform. In our game, the driver wears a tracked video see-through head-worn display, and controls the car with a passive tangible controller. Other players can participate by manipulating waypoints that the car must pass and obstacles with which the car can collide. We discuss our AR infrastructure, which supports the creation of AR applications and games in a managed code environment, the user interface we developed for the AR racing game, the game's software and hardware architecture, and feedback and observations from early demonstrations.", "title": "" }, { "docid": "7d6c87baff95b89d975b98bcf8a132c0", "text": "There is precisely one complete language processing system to date: the human brain. Though there is debate on how much built-in bias human learners might have, we definitely acquire language in a primarily unsupervised fashion. On the other hand, computational approaches to language processing are almost exclusively supervised, relying on hand-labeled corpora for training. This reliance is largely due to unsupervised approaches having repeatedly exhibited discouraging performance. In particular, the problem of learning syntax (grammar) from completely unannotated text has received a great deal of attention for well over a decade, with little in the way of positive results. We argue that previous methods for this task have generally underperformed because of the representations they used. 
Overly complex models are easily distracted by non-syntactic correlations (such as topical associations), while overly simple models aren't rich enough to capture important first-order properties of language (such as directionality, adjacency, and valence). In this work, we describe several syntactic representations and associated probabilistic models which are designed to capture the basic character of natural language syntax as directly as possible. First, we examine a nested, distributional method which induces bracketed tree structures. Second, we examine a dependency model which induces word-to-word dependency structures. Finally, we demonstrate that these two models perform better in combination than they do alone. With these representations, high-quality analyses can be learned from surprisingly little text, with no labeled examples, in several languages (we show experiments with English, German, and Chinese). Our results show above-baseline performance in unsupervised parsing in each of these languages. Grammar induction methods are useful since parsed corpora exist for only a small number of languages. More generally, most high-level NLP tasks, such as machine translation", "title": "" }, { "docid": "a12422abe3e142b83f5f242dc754cca1", "text": "Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.", "title": "" }, { "docid": "db2b94a49d4907504cf2444305287ec8", "text": "In this paper, we propose a principled Tag Disentangled Generative Adversarial Networks (TDGAN) for re-rendering new images for the object of interest from a single image of it by specifying multiple scene properties (such as viewpoint, illumination, expression, etc.). The whole framework consists of a disentangling network, a generative network, a tag mapping net, and a discriminative network, which are trained jointly based on a given set of images that are completely/partially tagged (i.e., supervised/semi-supervised setting). Given an input image, the disentangling network extracts disentangled and interpretable representations, which are then used to generate images by the generative network. In order to boost the quality of disentangled representations, the tag mapping net is integrated to explore the consistency between the image and its tags. Furthermore, the discriminative network is introduced to implement the adversarial training strategy for generating more realistic images. 
Experiments on two challenging datasets demonstrate the state-of-the-art performance of the proposed framework in the problem of interest.", "title": "" }, { "docid": "ee4c6084527c6099ea5394aec66ce171", "text": "Gualzru’s path to the Advertisement World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Fernando Fernández, Moisés Mart́ınez, Ismael Garćıa-Varea, Jesús Mart́ınez-Gómez, Jose Pérez-Lorenzo, Raquel Viciana, Pablo Bustos, Luis J. Manso, Luis Calderita, Marco Antonio Gutiérrez Giraldo, Pedro Núñez, Antonio Bandera, Adrián Romero-Garcés, Juan Bandera and Rebeca Marfil", "title": "" }, { "docid": "4fcb316e01885475920f8b91f6a4c00d", "text": "Transportation, as a means for moving goods and people between different locations, is a vital element of modern society. In this paper, we discuss how big data technology infrastructure fits into the current development of China, and provide suggestions for improvement. We discuss the current situation of China's transportation system, and outline relevant big data technologies that are being used in the transportation domain. Finally we point out opportunities for improvement of China's transportation system, through standardisation, integration of big data analytics in a national framework, and point to the future of transportation in China and beyond.", "title": "" }, { "docid": "e5380801d69c3acf7bfe36e868b1dadb", "text": "Skin-mountable chemical sensors using flexible chemically sensitive nanomaterials are of great interest for electronic skin (e-skin) application. To build these sensors, the emerging atomically thin two-dimensional (2D) layered semiconductors could be a good material candidate. Herein, we show that a large-area WS2 film synthesized by sulfurization of a tungsten film exhibits high humidity sensing performance both in natural flat and high mechanical flexible states (bending curvature down to 5 mm). The conductivity of as-synthesized WS2 increases sensitively over a wide relative humidity range (up to 90%) with fast response and recovery times in a few seconds. By using graphene as electrodes and thin polydimethylsiloxane (PDMS) as substrate, a transparent, flexible, and stretchable humidity sensor was fabricated. This senor can be well laminated onto skin and shows stable water moisture sensing behaviors in the undeformed relaxed state as well as under compressive and tensile loadings. Furthermore, its high sensing performance enables real-time monitoring of human breath, indicating a potential mask-free breath monitoring for healthcare application. We believe that such a skin-activity compatible WS2 humidity sensor may shed light on developing low power consumption wearable chemical sensors based on 2D semiconductors.", "title": "" }, { "docid": "ca356c9bec43950b14014ff3cbb6909b", "text": "Microbionic robots are used to access small tedious spaces with high maneuverability. These robots are employed in surveying and inspection of pipelines as well as tracking and examination of animal or human body during surgical activities. Relatively bigger and powerful robots are used for searching people trapped under wreckages and dirt after disasters. In order to achieve high maneuverability and to tackle various critical scenarios, a novel design of multi-segment Vermicular Robot is proposed with an adaptable actuation mechanism. 
Owing to the 3 Degrees of freedom (Dof) actuation mechanism, it will not only have faster forward motion but its full hemispherical turning capability would allow the robot to sharply steer as well as lift with smaller radii. The Robot will have the capability to simultaneously follow peristaltic motion (elongation/retraction) as well as looper motion (lifting body up/down). The paper presents locomotion patterns of the Vermicular Robot having Canfield actuation mechanism and highlights various scenarios in order to avoid obstacles en-route.", "title": "" }, { "docid": "55ec669a67b88ff0b6b88f1fa6408df9", "text": "This paper proposes low overhead training techniques for a wireless communication system equipped with a Multifunctional Reconfigurable Antenna (MRA) capable of dynamically changing beamwidth and beam directions. A novel microelectromechanical system (MEMS) MRA antenna is presented with radiation patterns (generated using complete electromagnetic full-wave analysis) which are used to quantify the communication link performance gains. In particular, it is shown that using the proposed Exhaustive Training at Reduced Frequency (ETRF) consistently results in a reduction in training overhead. It is also demonstrated that further reduction in training overhead is possible using statistical or MUSIC-based training schemes. Bit Error Rate (BER) and capacity simulations are carried out using an MRA, which can tilt its radiation beam into one of Ndir = 4 or 8 directions with variable beamwidth (≈2π/Ndir). The performance of each training scheme is quantified for OFDM systems operating in frequency selective channels with and without Line of Sight (LoS). We observe 6 dB of gain at BER = 10-4 and 6 dB improvement in capacity (at capacity = 6 bits/sec/subcarrier) are achievable for an MRA with Ndir= 8 as compared to omni directional antennas using ETRF scheme in a LoS environment.", "title": "" } ]
scidocsrr
bb03c123d09243ab35c3b1ff63055106
The wisdom of minority: discovering and targeting the right group of workers for crowdsourcing
[ { "docid": "a2688a1169babed7e35a52fa875505d4", "text": "Crowdsourcing label generation has been a crucial component for many real-world machine learning applications. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of hyperplane binary labeling rules for the Dawid-Skene (and Symmetric DawidSkene ) crowdsourcing model. The bounds can be applied to analyze many commonly used prediction methods, including the majority voting, weighted majority voting and maximum a posteriori (MAP) rules. These bound results can be used to control the error rate and design better algorithms. In particular, under the Symmetric Dawid-Skene model we use simulation to demonstrate that the data-driven EM-MAP rule is a good approximation to the oracle MAP rule which approximately optimizes our upper bound on the mean error rate for any hyperplane binary labeling rule. Meanwhile, the average error rate of the EM-MAP rule is bounded well by the upper bound on the mean error rate of the oracle MAP rule in the simulation.", "title": "" } ]
[ { "docid": "80d2cf93b717582dfaa80d73ff370c4e", "text": "Frameworks and libraries change their APIs. Migrating an application to the new API is tedious and disrupts the development process. Although some tools and ideas have been proposed to solve the evolution of APIs, most updates are done manually. To better understand the requirements for migration tools, we studied the API changes of four frameworks and one library. We discovered that the changes that break existing applications are not random, but tend to fall into particular categories. Over 80% of these changes are refactorings. This suggests that refactoring-based migration tools should be used to update applications.", "title": "" }, { "docid": "b4d9afd6bca3e1dcb5a3f3c69a4c8848", "text": "Internet that covers a large information gives an opportunity to obtain knowledge from it. Internet contains large unstructured and unorganized data such as text, video, and image. Problems arise on how to organize large amount of data and obtain a useful information from it. This information can be used as knowledge in the intelligent computer system. Ontology as one of knowledge representation covers a large area topic. For constructing domain specified ontology, we use large text dataset on Internet and organize it into specified domain before ontology building process is done. We try to implement naive bayes text classifier using map reduce programming model in our research for organizing our large text dataset. In this experiment, we use animal and plant domain article in Wikipedia online encyclopedia as our dataset. Our proposed method can achieve highest accuracy with score about 98.8%. This experiment shows that our proposed method provides a robust system and good accuracy for classifying document into specified domain in preprocessing phase for domain specified ontology building.", "title": "" }, { "docid": "31ab06dc5b41781bf5405e7ead93434e", "text": "In this age of modern era, the use of internet must be maximized. Yeah, internet will help us very much not only for important thing but also for daily activities. Many people now, from any level can use internet. The sources of internet connection can also be enjoyed in many places. As one of the benefits is to get the on-line integration of medical and dental care and patient data book, as the world window, as many people suggest.", "title": "" }, { "docid": "5908245db0e9c28c62e57ae6564e42ab", "text": "The three-dimensional Network-on-Chip (3D NoC) has been proposed to solve the complex on-chip communication issues in multicore systems using die stacking in recent days. Because of the larger power density and the heterogeneous thermal conductance in different silicon layers of 3D NoC, the thermal problems of 3D NoC become more exacerbated than that of 2D NoC and become a major design constraint for a high-performance system. To control the system temperature under a certain thermal limit, many Dynamic Thermal Managements (DTMs) have been proposed. Recently, for emergent cooling, the full throttling scheme is usually employed as the system temperature reaches the alarming level. Hence, the conventional reactive DTM suffers from significant performance impact because of the pessimistic reaction. In this paper, we propose a throttle-based proactive DTM(T-PDTM) scheme to predict the future temperature through a new Thermal RC-based temperature prediction (RCTP) model. 
The RCTP model can precisely predict the temperature with heterogeneous workload assignment with low constant computational complexity. Based on the predictive temperature, the proposed T-PDTM scheme will assign the suitable clock frequency for each node of the NoC system to perform early temperature control through power budget distribution. Based on the experimental results, compared with the conventional reactive throttled-based DTMs, the T-PDTM scheme can help to reduce 11.4~80.3 percent fully throttled nodes and improves the network throughput by around 1.5~211.8 percent.", "title": "" }, { "docid": "85e6c9bc6f86560e45276df947db48aa", "text": "Deep reinforcement learning (RL) has achieved many recent successes, yet experiment turn-around time remains a key bottleneck in research and in practice. We investigate how to optimize existing deep RL algorithms for modern computers, specifically for a combination of CPUs and GPUs. We confirm that both policy gradient and Q-value learning algorithms can be adapted to learn using many parallel simulator instances. We further find it possible to train using batch sizes considerably larger than are standard, without negatively affecting sample complexity or final performance. We leverage these facts to build a unified framework for parallelization that dramatically hastens experiments in both classes of algorithm. All neural network computations use GPUs, accelerating both data collection and training. Our results include using an entire DGX-1 to learn successful strategies in Atari games in mere minutes, using both synchronous and asynchronous algorithms.", "title": "" }, { "docid": "5c46291b9a3cab0fb2f9501fff6f6a36", "text": "We discuss the fundamental limits of computing using a new paradigm for quantum computation, cellular automata composed of arrays of Coulombically coupled quantum dot molecules, which we term quantum cellular automata (QCA). Any logical or arithmetic operation can be performed in this scheme. QCA’s provide a valuable concrete example of quantum computation in which a number of fundamental issues come to light. We examine the physics of the computing process in this paradigm. We show to what extent thermodynamic considerations impose limits on the ultimate size of individual QCA arrays. Adiabatic operation of the QCA is examined and the implications for dissipationless computing are explored.", "title": "" }, { "docid": "23390fc169e0863a0f81ded090327c5e", "text": "Although genome-wide association studies (GWASs) have identified numerous loci associated with complex traits, imprecise modeling of the genetic relatedness within study samples may cause substantial inflation of test statistics and possibly spurious associations. Variance component approaches, such as efficient mixed-model association (EMMA), can correct for a wide range of sample structures by explicitly accounting for pairwise relatedness between individuals, using high-density markers to model the phenotype distribution; but such approaches are computationally impractical. We report here a variance component approach implemented in publicly available software, EMMA eXpedited (EMMAX), that reduces the computational time for analyzing large GWAS data sets from years to hours. We apply this method to two human GWAS data sets, performing association analysis for ten quantitative traits from the Northern Finland Birth Cohort and seven common diseases from the Wellcome Trust Case Control Consortium. 
We find that EMMAX outperforms both principal component analysis and genomic control in correcting for sample structure.", "title": "" }, { "docid": "df3e7333a8eac87bc828bd80e8f72ace", "text": "In this paper, we propose a multimodal biometrics system that combines fingerprint and palmprint features to overcome several limitations of unimodal biometrics—such as the inability to tolerate noise, distorted data and etc.—and thus able to improve the performance of biometrics for personal verification. The quality of fingerprint and palmprint images are first enhanced using a series of pre-processing techniques. Following, a bank of 2D Gabor filters is used to independently extract fingerprint and palmprint features, which are then concatenated into a single feature vector. We conclude that the proposed methodology has better performance and is more reliable compared to unimodal approaches using solely fingerprint or palmprint biometrics. This is supported by our experiments which are able to achieve equal error rate (EER) as low as 0.91% using the combined biometrics features.", "title": "" }, { "docid": "3c8cc4192ee6ddd126e53c8ab242f396", "text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.", "title": "" }, { "docid": "bbf987eef74d76cf2916ae3080a2b174", "text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. 
EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.", "title": "" }, { "docid": "470a363ba2e5b480e638f372c06bc140", "text": "In this paper, we describe a miniature climbing robot, 96 x 46 x 64 [mm], able to climb ferromagnetic surfaces and to make inner plane to plane transition using only two degrees of freedom. Our robot, named TRIPILLAR, combines magnetic caterpillars and magnets to climb planar ferromagnetic surfaces. Two triangular tracks are mounted in a differential drive mode, which allows squid steering and on spot turning. Exploiting the particular geometry and magnetic properties of this arrangement, TRIPILLAR is able to transit between intersecting surfaces. The intersection angle ranges from -10° to 90° on the pitch angle of the coordinate system of the robot regardless of the orientation of gravity. A possible path is to move from ground to ceiling and back. This achievement opens new avenues for mobile robotics inspection of ferromagnetic industrial structure with stringent size restriction, like the one encountered in power plants.", "title": "" }, { "docid": "edbbd9c7b7dcd063644b8445f07e5365", "text": "This paper describes and discusses the history and state of the art of continuous backbone robot manipulators. Also known as continuum manipulators, these robots, which resemble biological trunks and tentacles, offer capabilities beyond the scope of traditional rigid link manipulators. They are able to adapt their shape to navigate through complex environments and grasp a wide variety of payloads using their compliant backbones. In the paper, we review the current state of knowledge in the field, focusing particularly on kinematic and dynamic models for continuum robots. We discuss the relationships of these robots and their models to their counterparts in conventional rigid-link robots. Ongoing research and future developments in the field are discussed.", "title": "" }, { "docid": "7be63b45354e6f5e29855f7fd5ffbe52", "text": "Current social media research mainly focuses on temporal trends of the information flow and on the topology of the social graph that facilitates the propagation of information. In this paper we study the effect of the content of the idea on the information propagation. We present an efficient hybrid approach based on a linear regression for predicting the spread of an idea in a given time frame. We show that a combination of content features with temporal and topological features minimizes prediction error.\n Our algorithm is evaluated on Twitter hashtags extracted from a dataset of more than 400 million tweets. We analyze the contribution and the limitations of the various feature types to the spread of information, demonstrating that content aspects can be used as strong predictors thus should not be disregarded. We also study the dependencies between global features such as graph topology and content features.", "title": "" }, { "docid": "5f4b0c833e7a542eb294fa2d7a305a16", "text": "Security awareness is an often-overlooked factor in an information security program. While organizations expand their use of advanced security technology and continuously train their security professionals, very little is used to increase the security awareness among the normal users, making them the weakest link in any organization. 
As a result, today, organized cyber criminals are putting significant efforts to research and develop advanced hacking methods that can be used to steal money and information from the general public. Furthermore, the high internet penetration growth rate in the Middle East and the limited security awareness among users is making it an attractive target for cyber criminals. In this paper, we will show the need for security awareness programs in schools, universities, governments, and private organizations in the Middle East by presenting results of several security awareness studies conducted among students and professionals in UAE in 2010. This includes a comprehensive wireless security survey in which thousands of access points were detected in Dubai and Sharjah most of which are either unprotected or employ weak types of protection. Another study focuses on evaluating the chances of general users to fall victims to phishing attacks which can be used to steal bank and personal information. Furthermore, a study of the user’s awareness of privacy issues when using RFID technology is presented. Finally, we discuss several key factors that are necessary to develop a successful information security awareness program.", "title": "" }, { "docid": "868c64332ae433159a45c1cfbe283341", "text": "The term \"artificial intelligence\" is a buzzword today and is heavily used to market products, services, research, conferences, and more. It is scientifically disputed which types of products and services do actually qualify as \"artificial intelligence\" versus simply advanced computer technologies mimicking aspects of natural intelligence.\n Yet it is undisputed that, despite often inflationary use of the term, there are mainstream products and services today that for decades were only thought to be science fiction. They range from industrial automation, to self-driving cars, robotics, and consumer electronics for smart homes, workspaces, education, and many more contexts.\n Several technological advances enable what is commonly referred to as \"artificial intelligence\". It includes connected computers and the Internet of Things (IoT), open and big data, low cost computing and storage, and many more. Yet regardless of the definition of the term artificial intelligence, technological advancements in this area provide immense potential, especially for people with disabilities.\n In this paper we explore some of these potential in the context of web accessibility. We review some existing products and services, and their support for web accessibility. We propose accessibility conformance evaluation as one potential way forward, to accelerate the uptake of artificial intelligence, to improve web accessibility.", "title": "" }, { "docid": "10f32a4e0671adaee3e18f20592c4619", "text": "This paper presents a novel flexible sliding thigh frame for a gait enhancing mechatronic system. With its two-layered unique structure, the frame is flexible in certain locations and directions, and stiff at certain other locations, so that it can fît well to the wearer's thigh and transmit the assisting torque without joint loading. The paper describes the basic mechanics of this 3D flexible frame and its stiffness characteristics. We implemented the 3D flexible frame on a gait enhancing mechatronic system and conducted experiments. 
The performance of the proposed mechanism is verified by simulation and experiments.", "title": "" }, { "docid": "b4dd2d96a4919b90e53b520889c40c1c", "text": "The prosperity of Web 2.0 and social media brings about many diverse social networks of unprecedented scales, which present new challenges for more effective graph-mining techniques. In this chapter, we present some graph patterns that are commonly observed in large-scale social networks. As most networks demonstrate strong community structures, one basic task in social network analysis is community detection which uncovers the group membership of actors in a network. We categorize and survey representative graph mining approaches and evaluation strategies for community detection. We then present and discuss some research issues for future exploration.", "title": "" }, { "docid": "2f174828265ace6055f83393d1357c25", "text": "Coplanar wave guide (CPW) inter digital capacitor (IDC) configurations on printed circuit board (PCB) and parametric variations over frequency are studied by simulation using ADS Momentum. The structures are fabricated in printed circuit board using PCB fabrication techniques. The scattering parameters of the structure are measured using vector network analyzer (VNA). The capacitance is estimated in both case using an approximate circuit model and simulation. A Comparative study of the simulation performance with measured results conducted.", "title": "" }, { "docid": "6784b5e03e74b927b37a2bffa4113523", "text": "This paper deals with the study of the statistical properties of the capacity of Nakagami-lognormal (NLN) channels for various fading environments. Specifically, the impact of shadowing and the severity of fading on the channel capacity is investigated. We have derived analytical expressions for the probability density function (PDF), cumulative distribution function (CDF), level-crossing rate (LCR), and average duration of fades (ADF) of the channel capacity. These results are analyzed for different levels of shadowing and for various fading conditions, corresponding to different terrestrial environments. It is observed that the severity of fading and shadowing has a significant influence on the spread and the maximum value of the PDF and LCR of the channel capacity. Moreover, it is also observed that if the fading gets less severe as compared to the Rayleigh fading, the mean channel capacity increases. However, the shadowing effect has no impact on the mean channel capacity. The validity of all analytical results is confirmed by simulations.", "title": "" }, { "docid": "9841491b497a821a86c0d381380d7ce8", "text": "Recently the progress in technology and flourishing applications open up new forecast and defy for the image and video processing community. Compared to still images, video sequences afford more information about how objects and scenarios change over time. Quality of video is very significant before applying it to any kind of processing techniques. This paper deals with two major problems in video processing they are noise reduction and object segmentation on video frames. The segmentation of objects is performed using foreground segmentation based and fuzzy c-means clustering segmentation is compared with the proposed method Improvised fuzzy c – means segmentation based on color. This was applied in the video frame to segment various objects in the current frame. 
The proposed technique is a powerful method for image segmentation, and it works for both single and multiple feature data with spatial information. Experiments were conducted with various noise types and filtering methods to show which is best suited, and the proposed segmentation approach generates good-quality segmented frames.", "title": "" } ]
scidocsrr
83baa1b131c4731cfe9b696ee5bd51b6
Context-Sensitive Generation of Open-Domain Conversational Responses
[ { "docid": "9b9181c7efd28b3e407b5a50f999840a", "text": "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines. Introduction Generating sequential synthetic data that mimics the real one is an important problem in unsupervised learning. Recently, recurrent neural networks (RNNs) with long shortterm memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013). The most common approach to training an RNN is to maximize the log predictive likelihood of each true token in the training sequence given the previous observed tokens (Salakhutdinov 2009). However, as argued in (Bengio et al. 2015), the maximum likelihood approaches suffer from so-called exposure bias in the inference stage: the model generates a sequence iteratively and predicts next token conditioned on its previously predicted ones that may be never observed in the training data. Such a discrepancy between training and inference can incur accumulatively along with the sequence and will become prominent ∗Weinan Zhang is the corresponding author. Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. as the length of sequence increases. To address this problem, (Bengio et al. 2015) proposed a training strategy called scheduled sampling (SS), where the generative model is partially fed with its own synthetic data as prefix (observed tokens) rather than the true data when deciding the next token in the training stage. Nevertheless, (Huszár 2015) showed that SS is an inconsistent training strategy and fails to address the problem fundamentally. Another possible solution of the training/inference discrepancy problem is to build the loss function on the entire generated sequence instead of each transition. For instance, in the application of machine translation, a task specific sequence score/loss, bilingual evaluation understudy (BLEU) (Papineni et al. 2002), can be adopted to guide the sequence generation. However, in many other practical applications, such as poem generation (Zhang and Lapata 2014) and chatbot (Hingston 2009), a task specific loss may not be directly available to score a generated sequence accurately. 
General adversarial net (GAN) proposed by (Goodfellow and others 2014) is a promising framework for alleviating the above problem. Specifically, in GAN a discriminative net D learns to distinguish whether a given data instance is real or not, and a generative net G learns to confuse D by generating high quality data. This approach has been successful and been mostly applied in computer vision tasks of generating samples of natural images (Denton et al. 2015). Unfortunately, applying GAN to generating sequences has two problems. Firstly, GAN is designed for generating real-valued, continuous data but has difficulties in directly generating sequences of discrete tokens, such as texts (Huszár 2015). The reason is that in GANs, the generator starts with random sampling first and then a determistic transform, govermented by the model parameters. As such, the gradient of the loss from D w.r.t. the outputs by G is used to guide the generative model G (paramters) to slightly change the generated value to make it more realistic. If the generated data is based on discrete tokens, the “slight change” guidance from the discriminative net makes little sense because there is probably no corresponding token for such slight change in the limited dictionary space (Goodfellow 2016). Secondly, GAN can only give the score/loss for an entire sequence when it has been generated; for a partially generated sequence, it is non-trivial to balance how good as it is now and the future score as the entire sequence. ar X iv :1 60 9. 05 47 3v 6 [ cs .L G ] 2 5 A ug 2 01 7 In this paper, to address the above two issues, we follow (Bachman and Precup 2015; Bahdanau et al. 2016) and consider the sequence generation procedure as a sequential decision making process. The generative model is treated as an agent of reinforcement learning (RL); the state is the generated tokens so far and the action is the next token to be generated. Unlike the work in (Bahdanau et al. 2016) that requires a task-specific sequence score, such as BLEU in machine translation, to give the reward, we employ a discriminator to evaluate the sequence and feedback the evaluation to guide the learning of the generative model. To solve the problem that the gradient cannot pass back to the generative model when the output is discrete, we regard the generative model as a stochastic parametrized policy. In our policy gradient, we employ Monte Carlo (MC) search to approximate the state-action value. We directly train the policy (generative model) via policy gradient (Sutton et al. 1999), which naturally avoids the differentiation difficulty for discrete data in a conventional GAN. Extensive experiments based on synthetic and real data are conducted to investigate the efficacy and properties of the proposed SeqGAN. In our synthetic data environment, SeqGAN significantly outperforms the maximum likelihood methods, scheduled sampling and PG-BLEU. In three realworld tasks, i.e. poem generation, speech language generation and music generation, SeqGAN significantly outperforms the compared baselines in various metrics including human expert judgement. Related Work Deep generative models have recently drawn significant attention, and the ability of learning over large (unlabeled) data endows them with more potential and vitality (Salakhutdinov 2009; Bengio et al. 2013). (Hinton, Osindero, and Teh 2006) first proposed to use the contrastive divergence algorithm to efficiently training deep belief nets (DBN). (Bengio et al. 
2013) proposed denoising autoencoder (DAE) that learns the data distribution in a supervised learning fashion. Both DBN and DAE learn a low dimensional representation (encoding) for each data instance and generate it from a decoding network. Recently, variational autoencoder (VAE) that combines deep learning with statistical inference intended to represent a data instance in a latent hidden space (Kingma and Welling 2014), while still utilizing (deep) neural networks for non-linear mapping. The inference is done via variational methods. All these generative models are trained by maximizing (the lower bound of) training data likelihood, which, as mentioned by (Goodfellow and others 2014), suffers from the difficulty of approximating intractable probabilistic computations. (Goodfellow and others 2014) proposed an alternative training methodology to generative models, i.e. GANs, where the training procedure is a minimax game between a generative model and a discriminative model. This framework bypasses the difficulty of maximum likelihood learning and has gained striking successes in natural image generation (Denton et al. 2015). However, little progress has been made in applying GANs to sequence discrete data generation problems, e.g. natural language generation (Huszár 2015). This is due to the generator network in GAN is designed to be able to adjust the output continuously, which does not work on discrete data generation (Goodfellow 2016). On the other hand, a lot of efforts have been made to generate structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014). The most popular way of training RNNs is to maximize the likelihood of each token in the training data whereas (Bengio et al. 2015) pointed out that the discrepancy between training and generating makes the maximum likelihood estimation suboptimal and proposed scheduled sampling strategy (SS). Later (Huszár 2015) theorized that the objective function underneath SS is improper and explained the reason why GANs tend to generate natural-looking samples in theory. Consequently, the GANs have great potential but are not practically feasible to discrete probabilistic models currently. As pointed out by (Bachman and Precup 2015), the sequence data generation can be formulated as a sequential decision making process, which can be potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods (Sutton et al. 1999) can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation (Sutskever, Vinyals, and Le 2014), the reward signal is meaningful only for the entire sequence, for instance in the game of Go (Silver et al. 2016), the reward signal is only set at the end of the game. In", "title": "" } ]
[ { "docid": "302acca2245a0d97cfec06a92dfc1a71", "text": "concepts of tissue-integrated prostheses with remarkable functional advantages, innovations have resulted in dental implant solutions spanning the spectrum of dental needs. Current discussions concerning the relative merit of an implant versus a 3-unit fixed partial denture fully illustrate the possibility that single implants represent a bona fide choice for tooth replacement. Interestingly, when delving into the detailed comparisons between the outcomes of single-tooth implant versus fixed partial dentures or the intentional replacement of a failing tooth with an implant instead of restoration involving root canal therapy, little emphasis has been placed on the relative esthetic merits of one or another therapeutic approach to tooth replacement therapy. An ideal prosthesis should fully recapitulate or enhance the esthetic features of the tooth or teeth it replaces. Although it is clearly beyond the scope of this article to compare the various methods of esthetic tooth replacement, there is, perhaps, sufficient space to share some insights regarding an objective approach to planning, executing and evaluating the esthetic merit of single-tooth implant restorations.", "title": "" }, { "docid": "eee51fc5cd3bee512b01193fa396e19a", "text": "Croston’s method is a widely used to predict inventory demand when it is inter­ mittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston’s method and three related methods, and we show that any underlying model will be inconsistent with the prop­ erties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. [JEL: C53, C22, C51]", "title": "" }, { "docid": "59af1eb49108e672a35f7c242c5b4683", "text": "“The value concept, more than any other, should occupy a central position . . . able to unify the apparently diverse interests of all the sciences concerned with human behavior.” These words, proclaiming the centrality of the value concept, were written by a psychologist (Rokeach, 1973, p. 3), but similar stands have been taken by sociologists (e.g., Williams, 1968) and anthropologists (e.g., Kluckhohn, 1951). These theorists view values as the criteria people use to select and justify actions and to evaluate people (including the self) and events. We, too, adopt this view of values as criteria rather than as qualities inherent in objects. This article discusses work that is part of a larger project intended to explore the importance of values in a wide variety of contexts. The project addresses three broad questions about values. First, how are the value priorities of individuals affected by their social experiences? That is, how do the common experiences people have, because of their shared locations in the social structure (their education, age, gender, occupation, etc.), influence their value priorities? And, how do individuals’ unique experiences (trauma, relations with parents, immigration, etc.) affect their value priorities? Second, how do the value priorities held by individuals affect their behavioral orientations and choices? 
That is, how do value priorities influence ideologies, attitudes, and actions in the political, religious, environmental, and other domains?", "title": "" }, { "docid": "b0e5885587ab3796fe1ed0490ddda1bd", "text": "BACKGROUND\nEpicanthal deformity is one of the most frequently encountered cosmetic problems in Asian people. Herein, we introduce a new method for correction of epicanthal folds, which always is performed in combination with double eyelidplasty.\n\n\nMETHODS\nFirst, through upper and lower palpebral margin incisions, we release and excise the connective and orbicularis oculi muscle dense fibres underlying the epicanthal folds, as well as the superficial head of the medial canthal ligament. After repositioning the medial canthus in a double eyelidplastic procedure, we cut off the redundant skin tissue and close the incisions.\n\n\nRESULTS\n82 epicanthoplasties have been performed during the past 2 years. Follow-up time ranged from 1 to 32 months. Postsurgery scars were invisible in all cases. All patients were satisfied with the results. No recurrence of the epicanthal fold was observed.\n\n\nCONCLUSION\nThe new method introduced has advantages in avoiding scar formation and is an especially suitable approach for epicanthoplasty in Asian patients.", "title": "" }, { "docid": "d527daf7ae59c7bcf0989cad3183efbe", "text": "In today’s Web, Web services are created and updated on the fly. It’s already beyond the human ability to analysis them and generate the composition plan manually. A number of approaches have been proposed to tackle that problem. Most of them are inspired by the researches in cross-enterprise workflow and AI planning. This paper gives an overview of recent research efforts of automatic Web service composition both from the workflow and AI planning research community.", "title": "" }, { "docid": "d277a7e6a819af474b31c7a35b9c840f", "text": "Blending face geometry in different expressions is a popular approach for facial animation in films and games. The quality of the animation relies on the set of blend shape expressions, and creating sufficient blend shapes takes a large amount of time and effort. This paper presents a complete pipeline to create a set of blend shapes in different expressions for a face mesh having only a neutral expression. A template blend shapes model having sufficient expressions is provided and the neutral expression of the template mesh model is registered into the target face mesh using a non-rigid ICP (iterative closest point) algorithm. Deformation gradients between the template and target neutral mesh are then transferred to each expression to form a new set of blend shapes for the target face. We solve optimization problem to consistently map the deformation of the source blend shapes to the target face model. The result is a new set of blend shapes for a target mesh having triangle-wise correspondences between the source face and target faces. After creating blend shapes, the blend shape animation of the source face is retargeted to the target mesh automatically.", "title": "" }, { "docid": "37a8ab782f9ed9863ada52d6fbacee99", "text": "Augmented and virtual reality have one big thing in common. They both have the remarkable ability to alter our perception of the world. Virtual reality is able to transpose the user. Augmented reality however, takes our current reality and adds something to it. Hence, the 3D positioning information in Augmented reality system is very important. 
In our study, the authors proposed three new techniques, namely the weighted ratio method, the advanced geometric model and the Kalman learning system, to improve the accuracy of the 3D indoor positioning system. The results show that the proposed method not only increases the positioning accuracy but also requires far less computation time than the comparison method from Chen's work.", "title": "" }, { "docid": "1959a87bdf9a5bd5f6d234eb998d439b", "text": "Customer journey maps used in service design are considered an important tool for discovering issues related to service experiences, since they can logically and contextually find problems and solve them. This article applies the idea of information visualization and utilizes the customer journey map in a three-dimensional format; therefore, designers not only can acquire information from a 2D diagram but can also manipulate intangible information with this designed tool. Through 3 service design case studies, this article expects to develop various application tools, allowing designers to explore more opportunity gaps from users' experiences.", "title": "" }, { "docid": "a5c67537b72e3cd184b43c0a0e7c96b2", "text": "These notes give a short introduction to Gaussian mixture models (GMMs) and the Expectation-Maximization (EM) algorithm, first for the specific case of GMMs, and then more generally. These notes assume you're familiar with basic probability and basic calculus. If you're interested in the full derivation (Section 3), some familiarity with entropy and KL divergence is useful but not strictly required. The notation here is borrowed from Introduction to Probability by Bertsekas & Tsitsiklis: random variables are represented with capital letters, values they take are represented with lowercase letters, p_X represents a probability distribution for random variable X, and p_X(x) represents the probability of value x (according to p_X). We'll also use the shorthand notation X_1^n to represent the sequence X_1, X_2, ..., X_n, and similarly x_1^n to represent x_1, x_2, ..., x_n. These notes follow a development somewhat similar to the one in Pattern Recognition and Machine Learning by Bishop.", "title": "" }, { "docid": "3d3e728e5587fe9fd686fca09a6a06f4", "text": "Knowing how to manage one's own learning has become increasingly important in recent years, as both the need and the opportunities for individuals to learn on their own outside of formal classroom settings have grown. During that same period, however, research on learning, memory, and metacognitive processes has provided evidence that people often have a faulty mental model of how they learn and remember, making them prone to both misassessing and mismanaging their own learning. After a discussion of what learners need to understand in order to become effective stewards of their own learning, we first review research on what people believe about how they learn and then review research on how people's ongoing assessments of their own learning are influenced by current performance and the subjective sense of fluency. We conclude with a discussion of societal assumptions and attitudes that can be counterproductive in terms of individuals becoming maximally effective learners.", "title": "" }, { "docid": "f400b94dd5f4d4210bd6873b44697e3a", "text": "A system for monitoring and forecasting urban air pollution is presented in this paper. The system uses low-cost air-quality monitoring motes that are equipped with an array of gaseous and meteorological sensors.
These motes wirelessly communicate to an intelligent sensing platform that consists of several modules. The modules are responsible for receiving and storing the data, preprocessing and converting the data into useful information, forecasting the pollutants based on historical information, and finally presenting the acquired information through different channels, such as mobile application, Web portal, and short message service. The focus of this paper is on the monitoring system and its forecasting module. Three machine learning (ML) algorithms are investigated to build accurate forecasting models for one-step and multi-step ahead of concentrations of ground-level ozone (O3), nitrogen dioxide (NO2), and sulfur dioxide (SO2). These ML algorithms are support vector machines, M5P model trees, and artificial neural networks (ANN). Two types of modeling are pursued: 1) univariate and 2) multivariate. The performance evaluation measures used are prediction trend accuracy and root mean square error (RMSE). The results show that using different features in multivariate modeling with M5P algorithm yields the best forecasting performances. For example, using M5P, RMSE is at its lowest, reaching 31.4, when hydrogen sulfide (H2S) is used to predict SO2. Contrarily, the worst performance, i.e., RMSE of 62.4, for SO2 is when using ANN in univariate modeling. The outcome of this paper can be significantly useful for alarming applications in areas with high air pollution levels.", "title": "" }, { "docid": "bfe58868ab05a6ba607ef1f288d37f33", "text": "There is much debate as to whether online offenders are a distinct group of sex offenders or if they are simply typical sex offenders using a new technology. A meta-analysis was conducted to examine the extent to which online and offline offenders differ on demographic and psychological variables. Online offenders were more likely to be Caucasian and were slightly younger than offline offenders. In terms of psychological variables, online offenders had greater victim empathy, greater sexual deviancy, and lower impression management than offline offenders. Both online and offline offenders reported greater rates of childhood physical and sexual abuse than the general population. Additionally, online offenders were more likely to be Caucasian, younger, single, and unemployed compared with the general population. Many of the observed differences can be explained by assuming that online offenders, compared with offline offenders, have greater self-control and more psychological barriers to acting on their deviant interests.", "title": "" }, { "docid": "89c9ad792245fc7f7e7e3b00c1e8147a", "text": "Contrasting hypotheses were posed to test the effect of Facebook exposure on self-esteem. Objective Self-Awareness (OSA) from social psychology and the Hyperpersonal Model from computer-mediated communication were used to argue that Facebook would either diminish or enhance self-esteem respectively. The results revealed that, in contrast to previous work on OSA, becoming self-aware by viewing one's own Facebook profile enhances self-esteem rather than diminishes it. Participants that updated their profiles and viewed their own profiles during the experiment also reported greater self-esteem, which lends additional support to the Hyperpersonal Model. 
These findings suggest that selective self-presentation in digital media, which leads to intensified relationship formation, also influences impressions of the self.", "title": "" }, { "docid": "ea2d97e8bde8e21b8291c370ce5815bf", "text": "Can the cell's perception of time be expressed through the length of the shortest telomere? To address this question, we analyze an asymmetric random walk that models telomere length for each division that can decrease by a fixed length a or, if recognized by a polymerase, it increases by a fixed length b ≫ a. Our analysis of the model reveals two phases, first, a determinist drift of the length toward a quasi-equilibrium state, and second, persistence of the length near an attracting state for the majority of divisions. The measure of stability of the latter phase is the expected number of divisions at the attractor (\"lifetime\") prior to crossing a threshold T that model senescence. Using numerical simulations, we further study the distribution of times for the shortest telomere to reach the threshold T. We conclude that the telomerase regulates telomere stability by creating an effective potential barrier that separates statistically the arrival time of the shortest from the next shortest to T. The present model explains how random telomere dynamics underlies the extension of cell survival time.", "title": "" }, { "docid": "6381c10a963b709c4af88047f38cc08c", "text": "A great deal of research has been focused on solving the job-shop problem (ΠJ), over the last forty years, resulting in a wide variety of approaches. Recently, much effort has been concentrated on hybrid methods to solve ΠJ as a single technique cannot solve this stubborn problem. As a result much effort has recently been concentrated on techniques that combine myopic problem specific methods and a meta-strategy which guides the search out of local optima. These approaches currently provide the best results. Such hybrid techniques are known as iterated local search algorithms or meta-heuristics. In this paper we seek to assess the work done in the job-shop domain by providing a review of many of the techniques used. The impact of the major contributions is indicated by applying these techniques to a set of standard benchmark problems. It is established that methods such as Tabu Search, Genetic Algorithms, Simulated Annealing should be considered complementary rather than competitive. In addition this work suggests guide-lines on features that should be incorporated to create a good ΠJ system. Finally the possible direction for future work is highlighted so that current barriers within ΠJ maybe surmounted as we approach the 21st Century.", "title": "" }, { "docid": "0576c4553dbfc2bbbe0e0d88afb890b3", "text": "This review covers the toxicology of mercury and its compounds. Special attention is paid to those forms of mercury of current public health concern. Human exposure to the vapor of metallic mercury dates back to antiquity but continues today in occupational settings and from dental amalgam. Health risks from methylmercury in edible tissues of fish have been the subject of several large epidemiological investigations and continue to be the subject of intense debate. Ethylmercury in the form of a preservative, thimerosal, added to certain vaccines, is the most recent form of mercury that has become a public health concern. 
The review leads to general discussion of evolutionary aspects of mercury, protective and toxic mechanisms, and ends on a note that mercury is still an \"element of mystery.\"", "title": "" }, { "docid": "47dc81932a0ed4c56b945e49c5105c34", "text": "In this paper, the feature selection problem was formulated as a multi-objective optimization problem, and new criteria were proposed to fulfill the goal. Foremost, data were pre-processed with missing value replacement scheme, re-sampling procedure, data type transformation procedure, and min-max normalization procedure. After that a wide variety of classifiers and feature selection methods were conducted and evaluated. Finally, the paper presented comprehensive experiments to show the relative performance of the classification tasks. The experimental results revealed the success of proposed methods in credit approval data. In addition, the numeric results also provide guides in selection of feature selection methods and classifiers in the knowledge discovery process. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ddaf60e511051f3b7e521c4a90f3f9cf", "text": "The objective of this study was to determine the effects of formulation excipients and physical characteristics of inhalation particles on their in vitro aerosolization performance, and thereby to maximize their respirable fraction. Dry powders were produced by spray-drying using excipients that are FDA-approved for inhalation as lactose, materials that are endogenous to the lungs as albumin and dipalmitoylphosphatidylcholine (DPPC); and/or protein stabilizers as trehalose or mannitol. Dry powders suitable for deep lung deposition, i.e. with an aerodynamic diameter of individual particles <3 microm, were prepared. They presented 0.04--0.25 g/cm(3) bulk tap densities, 3--5 microm geometric particle sizes, up to 90% emitted doses and 50% respirable fractions in the Andersen cascade impactor using a Spinhaler inhaler device. The incorporation of lactose, albumin and DPPC in the formulation all improved the aerosolization properties, in contrast to trehalose and the mannitol which decreased powder flowability. The relative proportion of the excipients affected aerosol performance as well. The lower the bulk powder tap density, the higher the respirable fraction. Optimization of in vitro aerosolization properties of inhalation dry powders can be achieved by appropriately selecting composition and physical characteristics of the particles.", "title": "" }, { "docid": "d779fa7abffd94dd83121b29aa367cfe", "text": "The nexus of autonomous vehicle (AV) and electric vehicle (EV) technologies has important potential impacts on our transportation systems, particularly in the case of shared-use vehicles. There are natural synergies between shared AV fleets and EV technology, since fleets of AVs resolve the practical limitations of today’s non-autonomous EVs, including traveler range anxiety, access to charging infrastructure, and charging time management. Fleet-managed AVs relieve such concerns, managing range and charging activities based on real-time trip demand and established charging-station locations, as demonstrated in this paper. This work explores the management of a fleet of shared autonomous (battery-only) electric vehicles (SAEVs) in a regional discrete-time, agent-based model. The simulation examines the operation of SAEVs under various vehicle range and charging infrastructure scenarios in a gridded city modeled roughly after the densities of Austin, Texas. 
Results indicate that fleet size is sensitive to battery recharge time and vehicle range, with each 80-mile range SAEV replacing 3.7 privately owned vehicles and each 200-mile range SAEV replacing 5.5 privately owned vehicles, under Level II (240-volt AC) charging. With Level III 480-volt DC fast-charging infrastructure in place, these ratios rise to 5.4 vehicles for the 80-mile range SAEV and 6.8 vehicles for the 200-mile range SAEV. SAEVs can serve 96 to 98% of trip requests with average wait times between 7 and 10 minutes per trip. However, due to the need to travel while “empty” for charging and passenger pick-up, SAEV fleets are predicted to generate an additional 7.1 to 14.0% of travel miles. Financial analysis suggests that the combined cost of charging infrastructure, vehicle capital and maintenance, electricity, insurance, and registration for a fleet of SAEVs ranges from $0.42 to $0.49 per occupied mile traveled, which implies SAEV service can be offered at the equivalent per-mile cost of private vehicle ownership for low mileage households, and thus be competitive with current manually-driven carsharing services and significantly cheaper than on-demand driver-operated transportation services. The availability of inductive (wireless) charging infrastructure allows SAEVs to be price-competitive with nonelectric SAVs (when gasoline prices are between $2.18 and $3.50 per gallon). However, charging SAEVs at attendant-operated stations with traditional corded chargers incurs an additional $0.08 per mile compared to wireless charging, and as such would only be price-competitive with SAVs when gasoline reaches $4.35 to $5.70 per gallon.", "title": "" }, { "docid": "f2b13b98556a57b0d9486d628409892a", "text": "Emerging Complex Event Processing (CEP) applications in cyber physical systems like Smart Power Grids present novel challenges for end-to-end analysis over events, flowing from heterogeneous information sources to persistent knowledge repositories. CEP for these applications must support two distinctive features – easy specification patterns over diverse information streams, and integrated pattern detection over realtime and historical events. Existing work on CEP has been limited to relational query patterns, and engines that match events arriving after the query has been registered. We propose SCEPter, a semantic complex event processing framework which uniformly processes queries over continuous and archived events. SCEPteris built around an existing CEP engine with innovative support for semantic event pattern specification and allows their seamless detection over past, present and future events. Specifically, we describe a unified semantic query model that can operate over data flowing through event streams to event repositories. Compile-time and runtime semantic patterns are distinguished and addressed separately for efficiency. Query rewriting is examined and analyzed in the context of temporal boundaries that exist between event streams and their repository to avoid duplicate or missing results. The design and prototype implementation of SCEPterare analyzed using latency and throughput metrics for scenarios from the Smart Grid domain.", "title": "" } ]
scidocsrr
5980e86122b46c47ecb9d1277583bc83
The cognitive benefits of interacting with nature.
[ { "docid": "44ef307640f82994887b011395eba3fc", "text": "An analysis of the underlying similarities between the Eastern meditation tradition and attention restoration theory (ART) provides a basis for an expanded framework for studying directed attention. The focus of the analysis is the active role the individual can play in the preservation and recovery of the directed attention capacity. Two complementary strategies are presented that can help individuals more effectively manage their attentional resource. One strategy involves avoiding unnecessary costs in terms of expenditure of directed attention. The other involves enhancing the effect of restorative opportunities. Both strategies are hypothesized to be more effective if one gains generic knowledge, self-knowledge, and specific skills. The interplay between a more active form of mental involvement and the more passive approach of meditation appears to have far-reaching ramifications for managing directed attention. Research on mental restoration has focused on the role of the environment and especially the natural environment. Such settings have been shown to 480 AUTHOR’S NOTE: This article benefited greatly from the many improvements in organization, expression, and content made by Rachel Kaplan and the many suggestions concerning consistency, clarity, and accuracy made by Terry Hartig. Thanks also to the SESAME group for providing a supportive environment for exploring many of the themes discussed here. The project was funded in part by USDA Forest Service, North Central Experiment Station, Urban Forestry Unit Co-operative Agreements. ENVIRONMENT AND BEHAVIOR, Vol. 33 No. 4, July 2001 480-506 © 2001 Sage Publications © 2001 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution. at PENNSYLVANIA STATE UNIV on April 17, 2008 http://eab.sagepub.com Downloaded from reduce both stress and directed attention fatigue (DAF) (Hartig & Evans, 1993). Far less emphasis, however, has been placed on the possibility of active participation by the individual in need of recovery. A major purpose of this article is to explore the potential of this mostly neglected component of the restorative process. The article examines the role of attention in the restoration process both from the perspective of attention restoration theory (ART) and by exploring insights from the meditation tradition. The two perspectives are different in important ways, most notably in the active role that is played by the individual. At the same time, there are interesting common issues. In particular, I explore two ways to bring these frameworks together, namely preserving directed attention by avoiding costs (i.e., things that drain the attentional resource) and recovering attention through enhancement of the restorative process. These lead to a variety of tools and strategies that are available in the quest for restoration and a set of hypotheses concerning their expected effect on an individual’s effectiveness.", "title": "" } ]
[ { "docid": "cfb7fb13adf09f5cb5657ff7f42c41e5", "text": "The antenna design for ultra wideband (UWB) signal radiation is one of the main challenges of the UWB system, especially when low-cost, geometrically small and radio efficient structures are required for typical applications. This study presents a novel printed loop antenna with introducing an L shape portion to its arms. The antenna offers excellent performance for lower-band frequency of UWB system, ranging from 3.1 GHz to 5.1 GHz. The antenna exhibits a 10 dB return loss bandwidth over the entire frequency band. The antenna is designed on FR4 substrate and fed with 50 ohms coupled tapered transmission line. It is found that the lower frequency band depends on the L portion of the loop antenna; however the upper frequency limit was decided by the taper transmission line. Though with very simple geometry, the results are satisfactory.", "title": "" }, { "docid": "ce5c5d0d0cb988c96f0363cfeb9610d4", "text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.", "title": "" }, { "docid": "712636d3a1dfe2650c0568c8f7cf124c", "text": "Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. 
Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead. The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD.", "title": "" }, { "docid": "33b8417f25b56e5ea9944f9f33fc162c", "text": "Researchers have attempted to model information diffusion and topic trends and lifecycle on online social networks. They have investigated the role of content, social connections and communities, familiarity and behavioral similarity in this context. The current article presents a survey of representative models that perform topic analysis, capture information diffusion, and explore the properties of social connections in the context of online social networks. The article concludes with a set of outlines of open problems and possible directions of future research interest. This article is intended for researchers to identify the current literature, and explore possibilities to improve the art.", "title": "" }, { "docid": "ba69ac7c4667eb64e45564e5a5b822d2", "text": "Multi-unit recordings with tetrodes have been used in brain studies for many years, but surprisingly, scarcely in the cerebellum. The cerebellum is subdivided in multiple small functional zones. Understanding the proper features of the cerebellar computations requires a characterization of neuronal activity within each area. By allowing simultaneous recordings of neighboring cells, tetrodes provide a helpful technique to study the dynamics of the cerebellar local networks. Here, we discuss experimental configurations to optimize such recordings and demonstrate their use in the different layers of the cerebellar cortex. We show that tetrodes can also be used to perform simultaneous recordings from neighboring units in freely moving rats using a custom-made drive, thus permitting studies of cerebellar network dynamics in a large variety of behavioral conditions.", "title": "" }, { "docid": "df2c52d659bff75639783332b9bcd571", "text": "The Alt-Right is a neo-fascist white supremacist movement that is involved in violent extremism and shows signs of engagement in extensive disinformation campaigns. Using social media data mining, this study develops a deeper understanding of such targeted disinformation campaigns and the ways they spread. It also adds to the available literature on the endogenous and exogenous influences within the US far right, as well as motivating factors that drive disinformation campaigns, such as geopolitical strategy. 
This study is to be taken as a preliminary analysis to indicate future methods and follow-on research that will help develop an integrated approach to understanding the strategies and associations of the modern fascist movement.", "title": "" }, { "docid": "3181171d92ce0a8d3a44dba980c0cc5f", "text": "Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as -greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent’s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.", "title": "" }, { "docid": "06465bde1eb562e90e609a31ed2dfe70", "text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/autumn2016/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices.", "title": "" }, { "docid": "a926341e8b663de6c412b8e3a61ee171", "text": "— Studies within the EHEA framework include the acquisition of skills such as the ability to learn autonomously, which requires students to devote much of their time to individual and group work to reinforce and further complement the knowledge acquired in the classroom. In order to consolidate the results obtained from classroom activities, lecturers must develop tools to encourage learning and facilitate the process of independent learning. The aim of this work is to present the use of virtual laboratories based on Easy Java Simulations to assist in the understanding and testing of electrical machines. con los usuarios integrándose fácilmente en plataformas de e-aprendizaje. 
Para nuestra aplicación hemos escogido el Java Ejs (Easy Java Simulations), ya que es una herramienta de software gratuita, diseñada para el desarrollo de laboratorios virtuales interactivos, dispone de elementos visuales parametrizables", "title": "" }, { "docid": "2ff15076533d1065209e0e62776eaa69", "text": "In less than a decade, Cubesats have evolved from purely educational tools to a standard platform for technology demonstration and scientific instrumentation. The use of COTS (Commercial-Off-The-Shelf) components and the ongoing miniaturization of several technologies have already led to scattered instances of missions with promising scientific value. Furthermore, advantages in terms of development cost and development time with respect to larger satellites, as well as the possibility of launching several dozens of Cubesats with a single rocket launch, have brought forth the potential for radically new mission architectures consisting of very large constellations or clusters of Cubesats. These architectures promise to combine the temporal resolution of GEO missions with the spatial resolution of LEO missions, thus breaking a traditional tradeoff in Earth observation mission design. This paper assesses the current capabilities of Cubesats with respect to potential employment in Earth observation missions. A thorough review of Cubesat bus technology capabilities is performed, identifying potential limitations and their implications on 17 different Earth observation payload technologies. These results are matched to an exhaustive review of scientific requirements in the field of Earth observation, assessing the possibilities of Cubesats to cope with the requirements set for each one of 21 measurement categories. Based on this review, several Earth observation measurements are identified that can potentially be compatible with the current state-of-the-art of Cubesat technology although some of them have actually never been addressed by any Cubesat mission. Simultaneously, other measurements are identified which are unlikely to be performed by Cubesats in the next few years due to insuperable constraints. Ultimately, this paper is intended to supply a box of ideas for universities to design future Cubesat missions with high", "title": "" }, { "docid": "b8e8404c061350aeba92f6ed1ecea1f1", "text": "We consider a single-product revenue management problem where, given an initial inventory, the objective is to dynamically adjust prices over a finite sales horizon to maximize expected revenues. Realized demand is observed over time, but the underlying functional relationship between price and mean demand rate that governs these observations (otherwise known as the demand function or demand curve) is not known. We consider two instances of this problem: (i) a setting where the demand function is assumed to belong to a known parametric family with unknown parameter values; and (ii) a setting where the demand function is assumed to belong to a broad class of functions that need not admit any parametric representation. In each case we develop policies that learn the demand function “on the fly,” and optimize prices based on that. The performance of these algorithms is measured in terms of the regret: the revenue loss relative to the maximal revenues that can be extracted when the demand function is known prior to the start of the selling season. 
We derive lower bounds on the regret that hold for any admissible pricing policy, and then show that our proposed algorithms achieve a regret that is “close” to this lower bound. The magnitude of the regret can be interpreted as the economic value of prior knowledge on the demand function, manifested as the revenue loss due to model uncertainty.", "title": "" }, { "docid": "329343cec99c221e6f6ce8e3f1dbe83f", "text": "Artificial Neural Networks (ANN) play a very vital role in making stock market predictions. As per the literature survey, various researchers have used various approaches to predict the prices of stock market. Some popular approaches used by researchers are Artificial Neural Networks, Genetic Algorithms, Fuzzy Logic, Auto Regressive Models and Support Vector Machines. This study presents ANN based computational approach for predicting the one day ahead closing prices of companies from the three different sectors:IT Sector (Wipro, TCS and Infosys), Automobile Sector (Maruti Suzuki Ltd.) and Banking Sector (ICICI Bank). Different types of artificial neural networks based models like Back Propagation Neural Network (BPNN), Radial Basis Function Neural Network (RBFNN), Generalized Regression Neural Network (GRNN) and Layer Recurrent Neural Network (LRNN) have been studied and used to forecast the short term and long term share prices of Wipro, TCS, Infosys, Maruti Suzuki and ICICI Bank. All the networks were trained with the 1100 days of trading data and predicted the prices up to next 6 months. Predicted output was generated through available historical data. Experimental results show that BPNN model gives minimum error (MSE) as compared to the RBFNN and GRNN models. GRNN model performs better as compared to RBFNN model. Forecasting performance of LRNN model is found to be much better than other three models. Keywordsartificial intelligence, back propagation, mean square error, artificial neural network.", "title": "" }, { "docid": "3f6cbad208a819fc8fc6a46208197d59", "text": "The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to the visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both treebased and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-toone viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database as well as the synthesis targets.", "title": "" }, { "docid": "a4ddf6920fa7a5c09fa0f62f9b96a2e3", "text": "In this paper, a class of single-phase Z-source (ZS) ac–ac converters is proposed with high-frequency transformer (HFT) isolation. 
The proposed HFT isolated (HFTI) ZS ac–ac converters possess all the features of their nonisolated counterparts, such as providing wide range of buck-boost output voltage with reversing or maintaining the phase angle, suppressing the in-rush and harmonic currents, and improved reliability. In addition, the proposed converters incorporate HFT for electrical isolation and safety, and therefore can save an external bulky line frequency transformer, for applications such as dynamic voltage restorers, etc. The proposed HFTI ZS converters are obtained from conventional (nonisolated) ZS ac–ac converters by adding only one extra bidirectional switch, and replacing two inductors with an HFT, thus saving one magnetic core. The switching signals for buck and boost modes are presented with safe-commutation strategy to remove the switch voltage spikes. A quasi-ZS-based HFTI ac–ac is used to discuss the operation principle and circuit analysis of the proposed class of HFTI ZS ac–ac converters. Various ZS-based HFTI proposed ac–ac converters are also presented thereafter. Moreover, a laboratory prototype of the proposed converter is constructed and experiments are conducted to produce output voltage of 110 Vrms / 60 Hz, which verify the operation of the proposed converters.", "title": "" }, { "docid": "de73980005a62a24820ed199fab082a3", "text": "Natural language interfaces offer end-users a familiar and convenient option for querying ontology-based knowledge bases. Several studies have shown that they can achieve high retrieval performance as well as domain independence. This paper focuses on usability and investigates if NLIs are useful from an end-user’s point of view. To that end, we introduce four interfaces each allowing a different query language and present a usability study benchmarking these interfaces. The results of the study reveal a clear preference for full sentences as query language and confirm that NLIs are useful for querying Semantic Web data.", "title": "" }, { "docid": "404bd4b3c7756c87805fa286415aac43", "text": "Although key techniques for next-generation wireless communication have been explored separately, relatively little work has been done to investigate their potential cooperation for performance optimization. To address this problem, we propose a holistic framework for robust 5G communication based on multiple-input-multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM). More specifically, we design a new framework that supports: 1) index modulation based on OFDM (OFDM–M) [1]; 2) sub-band beamforming and channel estimation to achieve massive path gains by exploiting multiple antenna arrays [2]; and 3) sub-band pre-distortion for peak-to-average-power-ratio (PAPR) reduction [3] to significantly decrease the PAPR and communication errors in OFDM-IM by supporting a linear behavior of the power amplifier in the modem. The performance of the proposed framework is evaluated against the state-of-the-art QPSK, OFDM-IM [1] and QPSK-spatiotemporal QPSK-ST [2] schemes. The results show that our framework reduces the bit error rate (BER), mean square error (MSE) and PAPR compared to the baselines by approximately 6–13dB, 8–13dB, and 50%, respectively.", "title": "" }, { "docid": "bc6cbf7da118c01d74914d58a71157ac", "text": "Currently, there are increasing interests in text-to-speech (TTS) synthesis to use sequence-to-sequence models with attention. 
These models are end-to-end meaning that they learn both co-articulation and duration properties directly from text and speech. Since these models are entirely data-driven, they need large amounts of data to generate synthetic speech with good quality. However, in challenging speaking styles, such as Lombard speech, it is difficult to record sufficiently large speech corpora. Therefore, in this study we propose a transfer learning method to adapt a sequence-to-sequence based TTS system of normal speaking style to Lombard style. Moreover, we experiment with a WaveNet vocoder in synthesis of Lombard speech. We conducted subjective evaluations to assess the performance of the adapted TTS systems. The subjective evaluation results indicated that an adaptation system with the WaveNet vocoder clearly outperformed the conventional deep neural network based TTS system in synthesis of Lombard speech.", "title": "" }, { "docid": "e0d4ab67dc39967b7daa4dc438ef79f5", "text": "Biclustering techniques have been widely used to identify homogeneous subgroups within large data matrices, such as subsets of genes similarly expressed across subsets of patients. Mining a max-sum sub-matrix is a related but distinct problem for which one looks for a (non-necessarily contiguous) rectangular sub-matrix with a maximal sum of its entries. Le Van et al. [6] already illustrated its applicability to gene expression analysis and addressed it with a constraint programming (CP) approach combined with large neighborhood search (CP-LNS). In this work, we exhibit some key properties of this NP-hard problem and define a bounding function such that larger problems can be solved in reasonable time. Two different algorithms are proposed in order to exploit the highlighted characteristics of the problem: a CP approach with a global constraint (CPGC) and mixed integer linear programming (MILP). Practical experiments conducted both on synthetic and real gene expression data exhibit the characteristics of these approaches and their relative benefits over the original CP-LNS method. Overall, the CPGC approach tends to be the fastest to produce a good solution. Yet, the MILP formulation is arguably the easiest to formulate and can also be competitive.", "title": "" }, { "docid": "7233197435b777dcd07a2c66be32dea9", "text": "We present an automated assembly system that directs the actions of a team of heterogeneous robots in the completion of an assembly task. From an initial user-supplied geometric specification, the system applies reasoning about the geometry of individual parts in order to deduce how they fit together. The task is then automatically transformed to a symbolic description of the assembly-a sort of blueprint. A symbolic planner generates an assembly sequence that can be executed by a team of collaborating robots. Each robot fulfills one of two roles: parts delivery or parts assembly. The latter are equipped with specialized tools to aid in the assembly process. Additionally, the robots engage in coordinated co-manipulation of large, heavy assemblies. We provide details of an example furniture kit assembled by the system.", "title": "" }, { "docid": "9fb492c57ef0795a9d71cd94a8ebc8f4", "text": "The increasing reliance on Computational Intelligence techniques like Artificial Neural Networks and Genetic Algorithms to formulate trading decisions have sparked off a chain of research into financial forecasting and trading trend identifications. 
Many research efforts focused on enhancing predictive capability and identifying turning points. Few actually presented empirical results using live data and actual technical trading rules. This paper proposed a novel RSPOP Intelligent Stock Trading System, that combines the superior predictive capability of RSPOP FNN and the use of widely accepted Moving Average and Relative Strength Indicator Trading Rules. The system is demonstrated empirically using real live stock data to achieve significantly higher Multiplicative Returns than a conventional technical rule trading system. It is able to outperform the buy-and-hold strategy and generate several folds of dollar returns over an investment horizon of four years. The Percentage of Winning Trades was increased significantly from an average of 70% to more than 92% using the system as compared to the conventional trading system; demonstrating the system’s ability to filter out erroneous trading signals generated by technical rules and to preempt any losing trades. The system is designed based on the premise that it is possible to capitalize on the swings in a stock counter’s price, without a need for predicting target prices.", "title": "" } ]
scidocsrr
65abdaa47188bce0bfe62b122f63a543
Automatically Designing CNN Architectures Using Genetic Algorithm for Image Classification
[ { "docid": "6836e08a29fa9aea26284a0ff799019a", "text": "Mastering the game of Go has remained a longstanding challenge to the field of AI. Modern computer Go programs rely on processing millions of possible future positions to play well, but intuitively a stronger and more ‘humanlike’ way to play the game would be to rely on pattern recognition rather than brute force computation. Following this sentiment, we train deep convolutional neural networks to play Go by training them to predict the moves made by expert Go players. To solve this problem we introduce a number of novel techniques, including a method of tying weights in the network to ‘hard code’ symmetries that are expected to exist in the target function, and demonstrate in an ablation study they considerably improve performance. Our final networks are able to achieve move prediction accuracies of 41.1% and 44.4% on two different Go datasets, surpassing previous state of the art on this task by significant margins. Additionally, while previous move prediction systems have not yielded strong Go playing programs, we show that the networks trained in this work acquired high levels of skill. Our convolutional neural networks can consistently defeat the well known Go program GNU Go and win some games against state of the art Go playing program Fuego while using a fraction of the play time.", "title": "" } ]
[ { "docid": "e494bd8d686605cdf10067781a8f36c9", "text": "The purpose of this paper is to examine the role of two basic types of learning in contemporary organizations – incremental (knowledge exploitation) and radical learning (knowledge exploration) – in making organization’s strategic decisions. In achieving this goal a conceptual model of influence of learning types on the nature of strategic decision making and their outcomes was formed, on the basis of which the empirical research was conducted, encompassing 54 top managers in large Croatian companies. The paper discusses the nature of organizational learning and decision making at strategic management level. The results obtained are suggesting that there is a relationship between managers' learning type and decision making approaches at strategic management level, as well as there is the interdependence between these two processes with strategic decision making outcomes. Within these results there are interesting insights, such as that the effect of radical learning on analytical decision making approach is significantly weaker and narrower when compared to the effect of incremental learning on the same approach, and that analytical decision making approach does not affect strategic decision making outcomes.", "title": "" }, { "docid": "dffb192cda5fd68fbea2eb15a6b00434", "text": "For AI systems to reason about real world situations, they need to recognize which processes are at play and which entities play key roles in them. Our goal is to extract this kind of rolebased knowledge about processes, from multiple sentence-level descriptions. This knowledge is hard to acquire; while semantic role labeling (SRL) systems can extract sentence level role information about individual mentions of a process, their results are often noisy and they do not attempt create a globally consistent characterization of a process. To overcome this, we extend standard within sentence joint inference to inference across multiple sentences. This cross sentence inference promotes role assignments that are compatible across different descriptions of the same process. When formulated as an Integer Linear Program, this leads to improvements over within-sentence inference by nearly 3% in F1. The resulting role-based knowledge is of high quality (with a F1 of nearly 82).", "title": "" }, { "docid": "0ff727ff06c02d2e371798ad657153c9", "text": "Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. 
Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.", "title": "" }, { "docid": "b784ff4a0e4458d19482d6715454f63d", "text": "We address two questions for training a convolutional neural network (CNN) for hyperspectral image classification: i) is it possible to build a pre-trained network? and ii) is the pretraining effective in furthering the performance? To answer the first question, we have devised an approach that pre-trains a network on multiple source datasets that differ in their hyperspectral characteristics and fine-tunes on a target dataset. This approach effectively resolves the architectural issue that arises when transferring meaningful information between the source and the target networks. To answer the second question, we carried out several ablation experiments. Based on the experimental results, a network trained from scratch performs as good as a network fine-tuned from a pre-trained network. However, we observed that pre-training the network has its own advantage in achieving better performances when deeper networks are required.", "title": "" }, { "docid": "5575e96ff51f8cf6526d5bdbe7dd93f5", "text": "The MACS-lift technique, in the simple or extended variation, delivers a reproducible and natural rejuvenation of the face and neck with minimal morbidity and a swift recovery. Retro-auricular extension of the surgery is avoided by pure vertical redraping of the facial skin. In patients with an exceptionally bad skin quality, any vertical pleats below the lobule that may appear can easily be corrected with a limited posterior cervicoplasty.", "title": "" }, { "docid": "aa10bf4f41ca866c8d59d9d703321bd2", "text": "This paper argues against the moral Turing test (MTT) as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, considered, and cautioned about in previous discussions (Allen et al. in J Exp Theor Artif Intell 12(3):251–261, 2000; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning (Gerdes and Øhrstrøm in J Inf Commun Ethics Soc 13(2):98–109, 2015). While these efforts have not led to the detailed development of an MTT, they nonetheless retain the idea to discuss what kinds of action and reasoning should be demanded of autonomous systems. We explore the flawed basis of an MTT in imitation, even one based on scenarios of morally accountable actions. MTT-based evaluations are vulnerable to deception, inadequate reasoning, and inferior moral performance vis a vis a system’s capabilities. We propose verification—which demands the design of transparent, accountable processes of reasoning that reliably prefigure the performance of autonomous systems—serves as a superior framework for both designer and system alike. As autonomous social robots in particular take on an increasing range of critical roles within society, we conclude that verification offers an essential, albeit challenging, moral measure of their design and performance.", "title": "" }, { "docid": "abdd1406266d7290166eb16b8a5045a9", "text": "Individualized manufacturing of cars requires kitting: the collection of individual sets of part variants for each car. This challenging logistic task is frequently performed manually by warehouseman. 
We propose a mobile manipulation robotic system for autonomous kitting, building on the Kuka Miiwa platform which consists of an omnidirectional base, a 7 DoF collaborative iiwa manipulator, cameras, and distance sensors. Software modules for detection and pose estimation of transport boxes, part segmentation in these containers, recognition of part variants, grasp generation, and arm trajectory optimization have been developed and integrated. Our system is designed for collaborative kitting, i.e. some parts are collected by warehouseman while other parts are picked by the robot. To address safe human-robot collaboration, fast arm trajectory replanning considering previously unforeseen obstacles is realized. The developed system was evaluated in the European Robotics Challenge 2, where the Miiwa robot demonstrated autonomous kitting, part variant recognition, and avoidance of unforeseen obstacles.", "title": "" }, { "docid": "19bef93ef428aac9fb25dac2889c4d6a", "text": "This paper presents an efficient unipolar stochastic computing hardware for convolutional neural networks (CNNs). It includes stochastic ReLU and optimized max function, which are key components in a CNN. To avoid the range limitation problem of stochastic numbers and increase the signal-to-noise ratio, we perform weight normalization and upscaling. In addition, to reduce the overhead of binary-to-stochastic conversion, we propose a scheme for sharing stochastic number generators among the neurons in a CNN. Experimental results show that our approach outperforms the previous ones based on stochastic computing in terms of accuracy, area, and energy consumption.", "title": "" }, { "docid": "3d12dea4ae76c5af54578262996fe0bb", "text": "We introduce a two-layer undirected graphical model, called a “Replicated Softmax”, that can be used to model and automatically extract low-dimensional latent semantic representations from a large unstructured collection of documents. We present efficient learning and inference algorithms for this model, and show how a Monte-Carlo based method, Annealed Importance Sampling, can be used to produce an accurate estimate of the log-probability the model assigns to test data. This allows us to demonstrate that the proposed model is able to generalize much better compared to Latent Dirichlet Allocation in terms of both the log-probability of held-out documents and the retrieval accuracy.", "title": "" }, { "docid": "10b1bcf25d8a96c076c32d3c20ecb664", "text": "Barrett's esophagus (BE) is characterized by a distinct Th2-predominant cytokine profile. However, antigens that shift the immune response toward the Th2 profile are unknown. We examined the effects of rebamipide on the esophageal microbiome and BE development in a rat model. BE was induced by esophagojejunostomy in 8-week-old male Wistar rats. Rats were divided into control and rebamipide-treated group receiving either a normal or a 0.225 % rebamipide-containing diet, respectively, and killed 8, 16, 24, and 32 weeks after the operation. PCR-amplified 16S rDNAs extracted from esophageal samples were examined by terminal-restriction fragment length polymorphism (T-RFLP) analysis to assess microbiome composition. The dynamics of four bacterial genera (Lactobacillus, Clostridium, Streptococcus, and Enterococcus) were analyzed by real-time PCR. The incidences of BE in the control and rebamipide group at 24 and 32 weeks were 80 and 100, and 20 and 33 %, respectively. 
T-RFLP analysis of normal esophagus revealed that the proportion of Clostridium was 8.3 %, while that of Lactobacillales was 71.8 %. The proportions of Clostridium increased and that of Lactobacillales decreased at 8 weeks in both groups. Such changes were consistently observed in the control but not in the rebamipide group. Clostridium and Lactobacillus expression was lower and higher, respectively, in the rebamipide group than in the control group. Rebamipide reduced BE development and altered the esophageal microbiome composition, which might play a role in BE development.", "title": "" }, { "docid": "b729bb8bc6a9b8dd655b77a7bfc68846", "text": "BACKGROUND\nWe describe our experiences with vaginal vault resection for vaginal recurrence of cervical cancer after hysterectomy and radiotherapy. After operative treatment, the rate of vaginal vault recurrence of uterine cervical cancer is reported to be about 5%. There is no consensus regarding the treatment for these cases.\n\n\nMETHODS\nBetween 2004 and 2012, eight patients with vaginal vault recurrence underwent removal of the vaginal wall via laparotomy after hysterectomy and radiotherapy.\n\n\nRESULTS\nThe median patient age was 45 years (range 35 to 70 years). The median operation time was 244.5 min (range 172 to 590 min), the median estimated blood loss was 362.5 mL (range 49 to 1,890 mL), and the median duration of hospitalization was 24.5 days (range 11 to 50 days). Two patients had intraoperative complications: a grade 1 bowel injury and a grade 1 bladder injury. The following postoperative complications were observed: one patient had vaginal vault bleeding, three patients developed vesicovaginal fistulae, and one patient had repeated ileus. Two patients needed clean intermittent catheterization. Local control was achieved in five of the eight cases.\n\n\nCONCLUSIONS\nVaginal vault resection is an effective treatment for vaginal recurrence of cervical cancer after hysterectomy and radiotherapy. However, complications of this procedure can be expected to reduce quality of life. Therefore, this operation should be selected with great care.", "title": "" }, { "docid": "7983fd76bd7d22c879f3f3469da49d89", "text": "Future driver assistance systems need to be more robust and reliable because these systems will react to increasingly complex situations. This requires increased performance in environment perception sensors and algorithms for detecting other relevant traffic participants and obstacles. An object's existence probability has proven to be a useful measure for determining the quality of an object. This paper presents a novel method for the fusion of the existence probability based on Dempster-Shafer evidence theory in the framework of a highlevel sensor data fusion architecture. The proposed method is able to take into consideration sensor reliability in the fusion process. The existence probability fusion algorithm is evaluated for redundant and partially overlapping sensor configurations.", "title": "" }, { "docid": "d1868eb5bb8d2995e7035058bee58d1e", "text": "All power systems have some inherent level of flexibility— designed to balance supply and demand at all times. Variability and uncertainty are not new to power systems because loads change over time and in sometimes unpredictable ways, and conventional resources fail unexpectedly. Variable renewable energy supply, however, can make this balance harder to achieve. 
Both wind and solar generation output vary significantly over the course of hours to days, sometimes in a predictable fashion, but often imperfectly forecasted.", "title": "" }, { "docid": "b954475d0fdd33bee90a7313d3a53ff7", "text": "Reddit is a social news website that aims to provide user privacy by encouraging them to use pseudonyms and refraining from any kind of personal data collection. However, users are often not aware of possibilities to indirectly gather a lot of information about them by analyzing their contributions and behaviour on this site. In order to investigate the feasibility of large-scale user classification with respect to the attributes social gender and citizenship this article provides and evaluates several data mining techniques. First, a large text corpus is collected from Reddit and annotations are derived using lexical rules. Then, a discriminative approach on classification using support vector machines is undertaken and extended by using topics generated by a latent Dirichlet allocation as features. Based on supervised latent Dirichlet allocation, a new generative model is drafted and implemented that captures Reddit’s specific structure of organizing information exchange. Finally, the presented techniques for user classification are evaluated and compared in terms of classification performance as well as time efficiency. Our results indicate that large-scale user classification on Reddit is feasible, which may raise privacy concerns among its community.", "title": "" }, { "docid": "0bd7c453279c97333e7ac6c52f7127d8", "text": "Among various biometric modalities, signature verification remains one of the most widely used methods to authenticate the identity of an individual. Signature verification, the most important component of behavioral biometrics, has attracted significant research attention over the last three decades. Despite extensive research, the problem still remains open to research due to the variety of challenges it offers. The high intra-class variations in signatures resulting from different physical or mental states of the signer, the differences that appear with aging and the visual similarity in case of skilled forgeries etc. are only a few of the challenges to name. This paper is intended to provide a review of the recent advancements in offline signature verification with a discussion on different types of forgeries, the features that have been investigated for this problem and the classifiers employed. The pros and cons of notable recent contributions to this problem have also been presented along with a discussion of potential future research directions on this subject.", "title": "" }, { "docid": "6c4b59e0e8cc42faea528dc1fe7a09ed", "text": "Grounded Theory is a powerful research method for collecting and analysing research data. It was ‘discovered’ by Glaser & Strauss (1967) in the 1960s but is still not widely used or understood by researchers in some industries or PhD students in some science disciplines. This paper demonstrates the steps in the method and describes the difficulties encountered in applying Grounded Theory (GT). A fundamental part of the analysis method in GT is the derivation of codes, concepts and categories. Codes and coding are explained and illustrated in Section 3. Merging the codes to discover emerging concepts is a central part of the GT method and is shown in Section 4. 
Glaser and Strauss’s constant comparison step is applied and illustrated so that the emerging categories can be seen coming from the concepts and leading to the emergent theory grounded in the data in Section 5. However, the initial applications of the GT method did have difficulties. Problems encountered when using the method are described to inform the reader of the realities of the approach. The data used in the illustrative analysis comes from recent IS/IT Case Study research into configuration management (CM) and the use of commercially available computer products (COTS). Why and how the GT approach was appropriate is explained in Section 6. However, the focus is on reporting GT as a research method rather than the results of the Case Study.", "title": "" }, { "docid": "4a677dae1152d5d69369ac76590f52f3", "text": "This paper presents a digital implementation of power control for induction cooking appliances with domestic low-cost vessels. The proposed control strategy is based on the asymmetrical duty-cycle with automatic switching-frequency tracking control employing a digital phase locked-loop (DPLL) control on high performance microcontroller. With the use of a phase locked-loop control, this method ensures the zero voltage switching (ZVS) operation under load parameter variation and power control at any power levels. Experimental results have shown that the proposed control method can reach the minimum output power at 15% of the rated value.", "title": "" }, { "docid": "ae5c21fa28694728aca532a582f612c3", "text": "The purpose of this study was to apply cross-education during 4 wk of unilateral limb immobilization using a shoulder sling and swathe to investigate the effects on muscle strength, muscle size, and muscle activation. Twenty-five right-handed participants were assigned to one of three groups as follows: the Immob + Train group wore a sling and swathe and strength trained (n = 8), the Immob group wore a sling and swathe and did not strength train (n = 8), and the Control group received no treatment (n = 9). Immobilization was applied to the nondominant (left) arm. Strength training consisted of maximal isometric elbow flexion and extension of the dominant (right) arm 3 days/wk. Torque (dynamometer), muscle thickness (ultrasound), maximal voluntary activation (interpolated twitch), and electromyography (EMG) were measured. The change in right biceps and triceps brachii muscle thickness [7.0 ± 1.9 and 7.1 ± 2.2% (SE), respectively] was greater for Immob + Train than Immob (0.4 ± 1.2 and -1.9 ± 1.7%) and Control (0.8 ± 0.5 and 0.0 ± 1.1%, P < 0.05). Left biceps and triceps brachii muscle thickness for Immob + Train (2.2 ± 0.7 and 3.4 ± 2.1%, respectively) was significantly different from Immob (-2.8 ± 1.1 and -5.2 ± 2.7%, respectively, P < 0.05). Right elbow flexion strength for Immob + Train (18.9 ± 5.5%) was significantly different from Immob (-1.6 ± 4.0%, P < 0.05). Right and left elbow extension strength for Immob + Train (68.1 ± 25.9 and 32.2 ± 9.0%, respectively) was significantly different from the respective limb of Immob (1.3 ± 7.7 and -6.1 ± 7.8%) and Control (4.7 ± 4.7 and -0.2 ± 4.5%, P < 0.05). Immobilization in a sling and swathe decreased strength and muscle size but had no effect on maximal voluntary activation or EMG. The cross-education effect on the immobilized limb was greater after elbow extension training. 
This study suggests that strength training the nonimmobilized limb benefits the immobilized limb for muscle size and strength.", "title": "" }, { "docid": "6b689eeeea41b8b121f16869f663a027", "text": "In recent years, growing attention has been devoted to the conversion of biomass into fuel ethanol, considered the cleanest liquid fuel alternative to fossil fuels. Significant advances have been made towards the technology of ethanol fermentation. This review provides practical examples and gives a broad overview of the current status of ethanol fermentation including biomass resources, microorganisms, and technology. Also, the promising prospects of ethanol fermentation are especially introduced. The prospects included are fermentation technology converting xylose to ethanol, cellulase enzyme utilized in the hydrolysis of lignocellulosic materials, immobilization of the microorganism in large systems, simultaneous saccharification and fermentation, and sugar conversion into ethanol.", "title": "" }, { "docid": "0da779e9ff9881d0af2fa372d6af4655", "text": "In recent years, neural machine translation (NMT) has demonstrated state-of-the-art machine translation (MT) performance. It is a new approach to MT, which tries to learn a set of parameters to maximize the conditional probability of target sentences given source sentences. In this paper, we present a novel approach to improve the translation performance in NMT by conveying topic knowledge during translation. The proposed topic-informed NMT can increase the likelihood of selecting words from the same topic and domain for translation. Experimentally, we demonstrate that topic-informed NMT can achieve a 1.15 (3.3% relative) and 1.67 (5.4% relative) absolute improvement in BLEU score on the Chinese-to-English language pair using NIST 2004 and 2005 test sets, respectively, compared to NMT without topic information.", "title": "" } ]
scidocsrr
e73d3fb4c41aff7363f35f337f8cd70c
Role of Self-Efficacy and Self-Concept Beliefs in Mathematical Problem Solving: A Path Analysis
[ { "docid": "c56c71775a0c87f7bb6c59d6607e5280", "text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.", "title": "" } ]
[ { "docid": "18f60ec99fea3828bdff539c711fc821", "text": "Vision-based road detection is an important research topic in different areas of computer vision such as the autonomous navigation of mobile robots. In outdoor unstructured environments such as villages and deserts, the roads are usually not well-paved and have variant colors or texture distributions. Traditional region- or edge-based approaches, however, are effective only in specific environments, and most of them have weak adaptability to varying road types and appearances. In this paper we describe a novel top-down based hybrid algorithm which properly combines both region and edge cues from the images. The main difference between our proposed algorithm and previous ones is that, before road detection, an off-line scene classifier is efficiently learned by both low- and high-level image cues to predict the unstructured road model. This scene classification can be considered a decision process which guides the selection of the optimal solution from region- or edge-based approaches to detect the road. Moreover, a temporal smoothing mechanism is incorporated, which further makes both model prediction and region classification more stable. Experimental results demonstrate that compared with traditional region- and edge-based algorithms, our algorithm is more robust in detecting the road areas with diverse road types and varying appearances in unstructured conditions.", "title": "" }, { "docid": "e5ed312b0c3aaa26240a9f3aaa2bd36e", "text": "This paper presents PDF-TREX, an heuristic approach for table recognition and extraction from PDF documents.The heuristics starts from an initial set of basic content elements and aligns and groups them, in bottom-up way by considering only their spatial features, in order to identify tabular arrangements of information. The scope of the approach is to recognize tables contained in PDF documents as a 2-dimensional grid on a Cartesian plane and extract them as a set of cells equipped by 2-dimensional coordinates. Experiments, carried out on a dataset composed of tables contained in documents coming from different domains, shows that the approach is well performing in recognizing table cells.The approach aims at improving PDF document annotation and information extraction by providing an output that can be further processed for understanding table and document contents.", "title": "" }, { "docid": "5f94ad6047ec9cf565b9960e89bbc913", "text": "In this paper, we compare the geometrical performance between the rigorous sensor model (RSM) and rational function model (RFM) in the sensor modeling of FORMOSAT-2 satellite images. For the RSM, we provide a least squares collocation procedure to determine the precise orbits. As for the RFM, we analyze the model errors when a large amount of quasi-control points, which are derived from the satellite ephemeris and attitude data, are employed. The model errors with respect to the length of the image strip are also demonstrated. Experimental results show that the RFM is well behaved, indicating that its positioning errors is similar to that of the RSM. Introduction Sensor orientation modeling is a prerequisite for the georeferencing of satellite images or 3D object reconstruction from satellite stereopairs. Nowadays, most of the high-resolution satellites use linear array pushbroom scanners. 
Based on the pushbroom scanning geometry, a number of investigations have been reported regarding the geometric accuracy of linear array images (Westin, 1990; Chen and Lee, 1993; Li, 1998; Tao et al., 2000; Toutin, 2003; Grodecki and Dial, 2003). The geometric modeling of the sensor orientation may be divided into two categories, namely, the rigorous sensor model (RSM) and the rational function model (RFM) (Toutin, 2004). Capable of fully delineating the imaging geometry between the image space and object space, the RSM has been recognized in providing the most precise geometrical processing of satellite images. Based on the collinearity condition, an image point corresponds to a ground point using the employment of the orientation parameters, which are expressed as a function of the sampling time. Due to the dynamic sampling, the RSM contains many mathematical calculations, which can cause problems for researchers who are not familiar with the data preprocessing. Moreover, with the increasing number of Earth resource satellites, researchers need to familiarize themselves with the uniqueness and complexity of each sensor model. Therefore, a generic sensor model of the geometrical processing is needed for simplification. (Dowman and Michalis, 2003). The RFM is a generalized sensor model that is used as an alternative for the RSM. The model uses a pair of ratios of two polynomials to approximate the collinearity condition equations. The RFM has been successfully applied to several high-resolution satellite images such as Ikonos (Di et al., 2003; Grodecki and Dial, 2003; Fraser and Hanley, 2003) and QuickBird (Robertson, 2003). Due to its simple implementation and standardization (NIMA, 2000), the approach has been widely used in the remote sensing community. Launched on 20 May 2004, FORMOSAT-2 is operated by the National Space Organization of Taiwan. The satellite operates in a sun-synchronous orbit at an altitude of 891 km and with an inclination of 99.1 degrees. It has a swath width of 24 km and orbits the Earth exactly 14 times per day, which makes daily revisits possible (NSPO, 2005). Its panchromatic images have a resolution of 2 meters, while the multispectral sensor produces 8 meter resolution images covering the blue, green, red, and NIR bands. Its high performance provides an excellent data resource for the remote sensing researchers. The major objective of this investigation is to compare the geometrical performances between the RSM and RFM when FORMOSAT-2 images are employed. A least squares collocation-based RSM will also be proposed in the paper. In the reconstruction of the RFM, rational polynomial coefficients are generated by using the on-board ephemeris and attitude data. In addition to the comparison of the two models, the modeling error of the RFM is analyzed when long image strips are used. Rigorous Sensor Models The proposed method comprises essentially of two parts. The first involves the development of the mathematical model for time-dependent orientations. The second performs the least squares collocation to compensate the local systematic errors. Orbit Fitting There are two types of sensor models for pushbroom satellite images, i.e., orbital elements (Westin, 1990) and state vectors (Chen and Chang, 1998). 
The orbital elements use the Kepler elements as the orbital parameters, while the state vectors calculate the orbital parameters directly by using the position vector. Although both sensor models are robust, the state vector model provides simpler mathematical calculations. For this reason, we select the state vector approach in this investigation. Three steps are included in the orbit modeling: (a) Initialization of the orientation parameters using on-board ephemeris data; (b) Compensation of the systematic errors of the orbital parameters and attitude data via ground control points (GCPs); and (c) Modification of the orbital parameters by using the Least Squares Collocation (Mikhail and Ackermann, 1982) technique.", "title": "" }, { "docid": "58ee9935e8111cf1fe0c09c7f61d7d07", "text": "Distributional reinforcement learning (distributional RL) has seen empirical success in complex Markov Decision Processes (MDPs) in the setting of nonlinear function approximation. However there are many different ways in which one can leverage the distributional approach to reinforcement learning. In this paper, we propose GAN Q-learning, a novel distributional RL method based on generative adversarial networks (GANs) and analyze its performance in simple tabular environments, as well as OpenAI Gym. We empirically show that our algorithm leverages the flexibility and blackbox approach of deep learning models while providing a viable alternative to traditional methods.", "title": "" }, { "docid": "f64711579cc7aaa12bbcda9b5f97f81c", "text": "Face recognition being the most important biometric trait it still faces many challenges, like pose variation, illumination variation etc. When such variations are present in both pose and illumination, all the algorithms are greatly affected by these variations and their performance gets degraded. In this paper we are presenting a detail survey on 2D face recognition under such uncontrolled conditions. Here we have explored different techniques proposed for illumination and pose problem in addition with the classifiers that have been successfully used for face recognition in general. The objective of this review paper is to summarize and compare some of the well-known methods for better understanding of reader.", "title": "" }, { "docid": "76c8deff0bd5e5d8cd521ca093caee1d", "text": "Most mathematics assignments consist of a group of problems requiring the same strategy. For example, a lesson on the quadratic formula is typically followed by a block of problems requiring students to use that formula, which means that students know the appropriate strategy before they read each problem. In an alternative approach, different kinds of problems appear in an interleaved order, which requires students to choose the strategy on the basis of the problem itself. In the classroom-based experiment reported here, grade 7 students (n = 140) received blocked or interleaved practice over a nine-week period, followed two weeks later by an unannounced test. The mean test scores were greater for material learned by interleaved practice rather than by blocked practice (72 % vs. 38 %, d = 1.05). 
This interleaving effect was observed even though the different kinds of problems were superficially dissimilar from each other, whereas previous interleaved mathematics studies had required students to learn nearly identical kinds of problems. We conclude that interleaving improves mathematics learning not only by improving discrimination between different kinds of problems, but also by strengthening the association between each kind of problem and its corresponding strategy.", "title": "" }, { "docid": "b9eeddd31b3c3aabde8afabd6216eaeb", "text": "The ever-increasing global demand for energy and materials has a pronounced effect on worldwide economic stability, diplomacy, and technical advancement. In response, a recent key research area in biotechnology has centered on the biological conversion of lignocellulosic biomass to simple sugars. Lignocellulosic biomass, converted to fermentable sugars via enzymatic hydrolysis of cell wall polysaccharides, can be utilized to generate a variety of downstream fuels and chemicals. Ethanol, in particular, has a high potential as transportation fuel to supplement or even replace gasoline derived from petroleum feedstocks. Biological or enzymatic hydrolysis offers the potential for low-cost, highyield, and selective production of targeted chemicals and value-added coproducts at milder operating conditions than thermochemical processes such as gasification or pyrolysis. Due to the complex nature of biomass, degrading enzymes, and their interactions, there is a substantial knowledge gap with respect to the mechanism of enzymatic hydrolysis and the relationship between biomass structure and enzymatic performance. This knowledge gap has greatly contributed to the fact that biological conversion of lignocellulosic biomass has not met the target performance and cost requirements for large-scale production and market entrance. This review highlights recent advances in analytical methods to characterize the chemical and molecular features related to the ability of biomass to resist biological deconstruction, defined as biomass recalcitrance. We also briefly discuss the application of some of these methods in a variety of studies that draw attention to relationships between biomass structure, the effectiveness of enzymatic hydrolysis and biomass recalcitrance.", "title": "" }, { "docid": "097da6ee2d13e0b4b2f84a26752574f4", "text": "Objective A sound theoretical foundation to guide practice is enhanced by the ability of nurses to critique research. This article provides a structured route to questioning the methodology of nursing research. Primary Argument Nurses may find critiquing a research paper a particularly daunting experience when faced with their first paper. Knowing what questions the nurse should be asking is perhaps difficult to determine when there may be unfamiliar research terms to grasp. Nurses may benefit from a structured approach which helps them understand the sequence of the text and the subsequent value of a research paper. Conclusion A framework is provided within this article to assist in the analysis of a research paper in a systematic, logical order. The questions presented in the framework may lead the nurse to conclusions about the strengths and weaknesses of the research methods presented in a research article. 
The framework does not intend to separate quantitative or qualitative paradigms but to assist the nurse in making broad observations about the nature of the research.", "title": "" }, { "docid": "48703205408e6ebd8f8fc357560acc41", "text": "Two experiments found that when asked to perform the physically exerting tasks of clapping and shouting, people exhibit a sizable decrease in individual effort when performing in groups as compared to when they perform alone. This decrease, which we call social loafing, is in addition to losses due to faulty coordination of group efforts. Social loafing is discussed in terms of its experimental generality and theoretical importance. The widespread occurrence, the negative consequences for society, and some conditions that can minimize social loafing are also explored.", "title": "" }, { "docid": "b3fc899c49ceb699f62b43bb0808a1b2", "text": "Social network users publicly share a wide variety of information with their followers and the general public ranging from their opinions, sentiments and personal life activities. There has already been significant advance in analyzing the shared information from both micro (individual user) and macro (community level) perspectives, giving access to actionable insight about user and community behaviors. The identification of personal life events from user’s profiles is a challenging yet important task, which if done appropriately, would facilitate more accurate identification of users’ preferences, interests and attitudes. For instance, a user who has just broken his phone, is likely to be upset and also be looking to purchase a new phone. While there is work that identifies tweets that include mentions of personal life events, our work in this paper goes beyond the state of the art by predicting a future personal life event that a user will be posting about on Twitter solely based on the past tweets. We propose two architectures based on recurrent neural networks, namely the classification and generation architectures, that determine the future personal life event of a user. We evaluate our work based on a gold standard Twitter life event dataset and compare our work with the state of the art baseline technique for life event detection. While presenting performance measures, we also discuss the limitations of our work in this paper.", "title": "" }, { "docid": "1d5e363647bd8018b14abfcc426246bb", "text": "This paper presents a new approach to improve the performance of finger-vein identification systems presented in the literature. The proposed system simultaneously acquires the finger-vein and low-resolution fingerprint images and combines these two evidences using a novel score-level combination strategy. We examine the previously proposed finger-vein identification approaches and develop a new approach that illustrates it superiority over prior published efforts. The utility of low-resolution fingerprint images acquired from a webcam is examined to ascertain the matching performance from such images. We develop and investigate two new score-level combinations, i.e., holistic and nonlinear fusion, and comparatively evaluate them with more popular score-level fusion approaches to ascertain their effectiveness in the proposed system. 
The rigorous experimental results presented on the database of 6264 images from 156 subjects illustrate significant improvement in the performance, i.e., both from the authentication and recognition experiments.", "title": "" }, { "docid": "ed5932b40de5ca97057247a3549f1f1f", "text": "Numerous pixel design and process improvements have been driven by the need to maintain or improve image quality at an ever smaller pixel size. A major development in imager technology is the mass production of Backside Illumination (BSI) technology for image sensors at low cost and high yield. The Omnivision-TSMC R&D alliance has been focused on the development of a low cost BSI technology which enables high performance at the 1.4μm pixel node. We report the performance of two BSI products both in mass production, using the same BSI 1.4μm pixel design and process. For the 1.4μm pixel, peak quantum efficiencies of 43.8%, 53.6%, 51.6% has been achieved in the red, green and blue channels respectively, with low crosstalk, excellent Gb/Gr performance, no lag, no FPN, 2.3e total read noise, dark current of 27 e/sec at 50°C, and low white pixel defect density. For 1.75μm pixel products, the baseline BSI process has been re-optimized to achieve peak QE of 53%, 60.2%, and 60.4% in the red, green, and blue channels, respectively. This modified process also meets mass production targets for the three 1.75μm BSI products. Our reliability testing has found no reliability issue associated with OmniBSITM architecture.", "title": "" }, { "docid": "e8403145a3d4a8a75348075410683e28", "text": "This paper presents a current-reuse complementary-input (CRCI) telescopic-cascode chopper stabilized amplifier with low-noise low-power operation. The current-reuse complementary-input strategy doubles the amplifier's effective transconductance by full current-reuse between complementary inputs, which significantly improves the noise-power efficiency. A pseudo-resistor based integrator is used in the DC servo loop to generate a high-pass cutoff below 1 Hz. The proposed amplifier features a mid-band gain of 39.25 dB, bandwidth from 0.12 Hz to 7.6 kHz, and draws 2.57 μA from a 1.2-V supply and exhibits an input-referred noise of 3.57 μVrms integrated from 100 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 2.5. The amplifier is designed in 0.13 μm 8-metal CMOS process.", "title": "" }, { "docid": "9dde89f24f55602e21823620b49633dd", "text": "Darier's disease is a rare late-onset genetic disorder of keratinisation. Mosaic forms of the disease characterised by localised and unilateral keratotic papules carrying post-zygotic ATP2A2 mutation in affected areas have been documented. Segmental forms of Darier's disease are classified into two clinical subtypes: type 1 manifesting with distinct lesions on a background of normal appearing skin and type 2 with well-defined areas of Darier's disease occurring on a background of less severe non-mosaic phenotype. Herein we describe two cases of type 1 segmental Darier's disease with favourable response to topical retinoids.", "title": "" }, { "docid": "c35a4278aa4a084d119238fdd68d9eb6", "text": "ARM TrustZone, which provides a Trusted Execution Environment (TEE), normally plays a role in keeping security-sensitive resources safe. However, to properly control access to the resources, it is not enough to just isolate them from the Rich Execution Environment (REE). 
In addition to the isolation, secure communication should be guaranteed between security-critical resources in the TEE and legitimate REE processes that are permitted to use them. Even though there is a TEE security solution — namely, a kernel-integrity monitor — it aims to protect the REE kernel’s static regions, not to secure communication between the REE and TEE. We propose SeCReT to ameliorate this problem. SeCReT is a framework that builds a secure channel between the REE and TEE by enabling REE processes to use session keys in the REE that is regarded as unsafe region. SeCReT provides the session key to a requestor process only when the requestor’s code and control flow integrity are verified. To prevent the key from being exposed to an attacker who already compromised the REE kernel, SeCReT flushes the key from the memory every time the processor switches into kernel mode. In this paper, we present the design and implementation of SeCReT to show how it protects the key in the REE. Our prototype is implemented on Arndale board, which offers a Cortex-A15 dual-core processor with TrustZone as its security extension. We performed a security analysis by using a kernel rootkit and also ran LMBench microbenchmark to evaluate the performance overhead imposed by SeCReT.", "title": "" }, { "docid": "a40727cfa31be91e0ed043826f1507d8", "text": "Deep clustering learns deep feature representations that favor clustering task using neural networks. Some pioneering work proposes to simultaneously learn embedded features and perform clustering by explicitly defining a clustering oriented loss. Though promising performance has been demonstrated in various applications, we observe that a vital ingredient has been overlooked by these work that the defined clustering loss may corrupt feature space, which leads to non-representative meaningless features and this in turn hurts clustering performance. To address this issue, in this paper, we propose the Improved Deep Embedded Clustering (IDEC) algorithm to take care of data structure preservation. Specifically, we manipulate feature space to scatter data points using a clustering loss as guidance. To constrain the manipulation and maintain the local structure of data generating distribution, an under-complete autoencoder is applied. By integrating the clustering loss and autoencoder’s reconstruction loss, IDEC can jointly optimize cluster labels assignment and learn features that are suitable for clustering with local structure preservation. The resultant optimization problem can be effectively solved by mini-batch stochastic gradient descent and backpropagation. Experiments on image and text datasets empirically validate the importance of local structure preservation and the effectiveness of our algorithm.", "title": "" }, { "docid": "f4e02461ec96c60866a37baa96f16f76", "text": "Equilibria in mechanics or in transportation models are not always expressed through a system of equations, but sometimes they are characterized by means of complementarity conditions involving a convex cone. This work deals with the analysis of cone-constrained eigenvalue problems. We discuss some theoretical issues like, for instance, the estimation of the maximal number of eigenvalues in a cone-constrained problem. Special attention is paid to the Paretian case. 
As a short addition to the theoretical part, we introduce and study two algorithms for solving numerically such type of eigenvalue problems.", "title": "" }, { "docid": "9a4bd291522b19ab4a6848b365e7f546", "text": "This paper reports on modern approaches in Information Extraction (IE) and its two main sub-tasks of Named Entity Recognition (NER) and Relation Extraction (RE). Basic concepts and the most recent approaches in this area are reviewed, which mainly include Machine Learning (ML) based approaches and the more recent trend to Deep Learning (DL)", "title": "" }, { "docid": "866e7819b0389f26daab015c6ff40b69", "text": "This study examined the effects of multiple risk, promotive, and protective factors on three achievement-related measures (i.e., grade point average, number of absences, and math achievement test scores) for African American 7th-grade students (n = 837). There were 3 main findings. First, adolescents had lower grade point averages, more absences, and lower achievement test scores as their exposure to risk factors increased. Second, different promotive and protective factors emerged as significant contributors depending on the nature of the achievement-related outcome that was being assessed. Third, protective factors were identified whose effects were magnified in the presence of multiple risks. Results were discussed in light of the developmental tasks facing adolescents and the contexts in which youth exposed to multiple risks and their families live.", "title": "" }, { "docid": "8ed247a04a8e5ab201807e0d300135a3", "text": "We reproduce the Structurally Constrained Recurrent Network (SCRN) model, and then regularize it using the existing widespread techniques, such as naïve dropout, variational dropout, and weight tying. We show that when regularized and optimized appropriately the SCRN model can achieve performance comparable with the ubiquitous LSTMmodel in language modeling task on English data, while outperforming it on non-English data. Title and Abstract in Russian Воспроизведение и регуляризация SCRN модели Мы воспроизводим структурно ограниченную рекуррентную сеть (SCRN), а затем добавляем регуляризацию, используя существующие широко распространенные методы, такие как исключение (дропаут), вариационное исключение и связка параметров. Мы показываем, что при правильной регуляризации и оптимизации показатели SCRN сопоставимы с показателями вездесущей LSTM в задаче языкового моделирования на английских текстах, а также превосходят их на неанглийских данных.", "title": "" } ]
scidocsrr
b5f01b058ba5f5dd9ce3bde9437ba931
Car detection for autonomous vehicle: LIDAR and vision fusion approach through deep learning framework
[ { "docid": "bc4ddc3ec9959cd6e8ff6713af6f33df", "text": "Object proposals have recently emerged as an essential cornerstone for object detection. The current state-of-theart object detectors employ object proposals to detect objects within a modest set of candidate bounding box proposals instead of exhaustively searching across an image using the sliding window approach. However, achieving high recall and good localization with few proposals is still a challenging problem. The challenge becomes even more difficult in the context of autonomous driving, in which small objects, occlusion, shadows, and reflections usually occur. In this paper, we present a robust object proposals re-ranking algorithm that effectivity re-ranks candidates generated from a customized class-independent 3DOP (3D Object Proposals) method using a two-stream convolutional neural network (CNN). The goal is to ensure that those proposals that accurately cover the desired objects are amongst the few top-ranked candidates. The proposed algorithm, which we call DeepStereoOP, exploits not only RGB images as in the conventional CNN architecture, but also depth features including disparity map and distance to the ground. Experiments show that the proposed algorithm outperforms all existing object proposal algorithms on the challenging KITTI benchmark in terms of both recall and localization. Furthermore, the combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results of all three KITTI object classes.", "title": "" } ]
[ { "docid": "f7b369690fa93420baa7bb43aa75ffec", "text": "Total Quality Management (TQM) and Kaizena continuous change toward betterment are two fundamental concepts directly dealing with continuous improvement of quality of processes and performance of an organization to achieve positive transformation in mindset and action of employees and management. For clear understanding and to get maximum benefit from both of these concepts, as such it becomes mandatory to precisely differentiate between TQM and Kaizen. TQM features primarily focus on customer’s satisfaction through improvement of quality. It is both a top down and bottom up approach whereas kaizen is processes focused and a bottom up approach of small incremental changes. Implementation of TQM is more costly as compared to Kaizen. Through kaizen, improvements are made using organization’s available resources. For the effective implementation of kaizen, the culture of the organization must be supportive and the result of continuous improvement should be communicated to the whole organization for motivation of all employees and for the success of continuous improvement program in the organization. This paper focuses on analyzing the minute differences between TQM and Kaizen. It also discusses the different tools and techniques under the umbrella of kaizen and TQM Philosophy. This paper will elucidate the differences in both these concepts as far as their inherent characteristics and practical implementations are concerned. In spite of differences in methodology, focus and scale of operation in both the concept, it can be simply concluded that Kaizen is one of the Technique of the T QM for continuous improvement of quality, process and performance of the organization. [Muhammad Saleem, Nawar Khan, Shafqat Hameed, M Abbas Ch. An Analysis of Relationship between Total Quality Management and Kaizen. Life Science Journal. 2012;9(3):31-40] (ISSN:1097-8135). http://www.lifesciencesite.com. 5 Key Worlds: Total Quality Management, Kaizen Technique, Continuous Improvement (CI), Tools & Techniques", "title": "" }, { "docid": "5f94ffca0f652a6b3ecd386d5ce73eb7", "text": "Identifying a patient’s important medical problems requires broad and deep medical expertise, as well as significant time to gather all the relevant facts from the patient’s medical record and assess the clinical importance of the facts in reaching the final conclusion. A patient’s medical problem list is by far the most critical information that a physician uses in treatment and care of a patient. In spite of its critical role, its curation, manual or automated, has been an unmet need in clinical practice. We developed a machine learning technique in IBM Watson to automatically generate a patient’s medical problem list. The machine learning model uses lexical and medical features extracted from a patient’s record using NLP techniques. We show that the automated method achieves 70% recall and 67% precision based on the gold standard that medical experts created on a set of deidentified patient records from a major hospital system in the US. To the best of our knowledge this is the first successful machine learning/NLP method of extracting an open-ended patient’s medical problems from an Electronic Medical Record (EMR). 
This paper also contributes a methodology for assessing accuracy of a medical problem list generation technique.", "title": "" }, { "docid": "71b6f02598ac24efbc4625ca060f1bae", "text": "Estimates of the worldwide incidence and mortality from 27 cancers in 2008 have been prepared for 182 countries as part of the GLOBOCAN series published by the International Agency for Research on Cancer. In this article, we present the results for 20 world regions, summarizing the global patterns for the eight most common cancers. Overall, an estimated 12.7 million new cancer cases and 7.6 million cancer deaths occur in 2008, with 56% of new cancer cases and 63% of the cancer deaths occurring in the less developed regions of the world. The most commonly diagnosed cancers worldwide are lung (1.61 million, 12.7% of the total), breast (1.38 million, 10.9%) and colorectal cancers (1.23 million, 9.7%). The most common causes of cancer death are lung cancer (1.38 million, 18.2% of the total), stomach cancer (738,000 deaths, 9.7%) and liver cancer (696,000 deaths, 9.2%). Cancer is neither rare anywhere in the world, nor mainly confined to high-resource countries. Striking differences in the patterns of cancer from region to region are observed.", "title": "" }, { "docid": "eaa6daff2f28ea7f02861e8c67b9c72b", "text": "The demand of fused magnesium furnaces (FMFs) refers to the average value of the power of the FMFs over a fixed period of time before the current time. The demand is an indicator of the electricity consumption of high energy-consuming FMFs. When the demand exceeds the limit of the Peak Demand (a predetermined maximum demand), the power supply of some FMF will be cut off to ensure that the demand is no more than Peak Demand. But the power cutoff will destroy the heat balance, reduce the quality and yield of the product. The composition change of magnesite in FMFs will cause demand spike occasionally, which a sudden increase in demand exceeds the limit and then drops below the limit. As a result, demand spike cause the power cutoff. In order to avoid the power cutoff at the moment of demand spike, the demand of FMFs needs to be forecasted. This paper analyzes the dynamic model of the demand of FMFs, using the power data, presents a data-driven demand forecasting method. This method consists of the following: PACF based decision module for the number of the input variables of the forecasting model, RBF neural network (RBFNN) based power variation rate forecasting model and demand forecasting model. Simulations based on actual data and industrial experiments at a fused magnesia plant show the effectiveness of the proposed method.", "title": "" }, { "docid": "856e7eeca46eb2c1a27ac0d1b5f0dc0b", "text": "The World Health Organization recommends four antenatal visits for pregnant women in developing countries. Cash transfers have been used to incentivize participation in health services. We examined whether modest cash transfers for participation in antenatal care would increase antenatal care attendance and delivery in a health facility in Kisoro, Uganda. Twenty-three villages were randomized into four groups: 1) no cash; 2) 0.20 United States Dollars (USD) for each of four visits; 3) 0.40 USD for a single first trimester visit only; 4) 0.40 USD for each of four visits. Outcomes were three or more antenatal visits and delivery in a health facility. Chi-square, analysis of variance, and generalized estimating equation analyses were performed to detect differences in outcomes. 
Women in the 0.40 USD/visit group had higher odds of three or more antenatal visits than the control group (OR 1.70, 95% CI: 1.13-2.57). The odds of delivering in a health facility did not differ between groups. However, women with more antenatal visits had higher odds of delivering in a health facility (OR 1.21, 95% CI: 1.03-1.42). These findings are important in an area where maternal mortality is high, utilization of health services is low, and resources are scarce.", "title": "" }, { "docid": "54fb1578fe2b47980c143f8a352a0c51", "text": "Purpose This paper studies the characteristics of chat messages from analyzing a collection of 33,121 sample messages gathered from 1700 sessions of conversations of 72 pairs of MSN Messenger users over a 4-month duration from June to September of 2005. The primary objective of chat message characterization is to understand the properties of chat messages for effective message analysis such as message topic detection. Methodology/Approach From the study on chat message characteristics, an indicative term-based categorization approach for chat topic detection is proposed. In the proposed approach, different techniques such as sessionalization of chat messages and extraction of features from icon texts and URLs are incorporated for message pre-processing. And Naïve Bayes, Associative Classification, and Support Vector Machine are employed as classifiers for categorizing topics from chat sessions. Findings The indicative term-based approach is superior to the traditional document frequency based approach for feature selection in chat topic categorization. Originality/Value This paper studies the characteristics of chat messages and proposes an indicative term-based categorization approach for chat topic detection. The proposed approach has been incorporated into an instant message analysis system for both online and offline chat topic detection.", "title": "" }, { "docid": "c692dd35605c4af62429edef6b80c121", "text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. 
Experimental results demonstrate the advantage of using chord features for music classification and retrieval.", "title": "" }, { "docid": "8a0373f7d93106f24ee4c9d33a1a30e0", "text": "OBJECTIVE\nTo evaluate the impact of therapeutic oral doses of stimulants on the brains of ADHD subjects as measured by magnetic resonance imaging (MRI)-based neuroimaging studies (morphometric, functional, spectroscopy).\n\n\nDATA SOURCES\nWe searched PubMed and ScienceDirect through the end of calendar year 2011 using the keywords (1) psychostimulants or methylphenidate or amphetamine, and (2) neuroimaging or MRI or fMRI, and (3) ADHD or ADD or attention-deficit/hyperactivity disorder or attention deficit hyperactivity disorder.\n\n\nSTUDY SELECTION\nWe included only English language articles with new data from case-control or placebo controlled studies that examined attention-deficit/hyperactivity disorder (ADHD) subjects on and off psychostimulants (as well as 5 relevant review articles).\n\n\nDATA EXTRACTION\nWe combined details of study design and medication effects in each imaging modality.\n\n\nRESULTS\nWe found 29 published studies that met our criteria. These included 6 structural MRI, 20 functional MRI studies, and 3 spectroscopy studies. Methods varied widely in terms of design, analytic technique, and regions of the brain investigated. Despite heterogeneity in methods, however, results were consistent. With only a few exceptions, the data on the effect of therapeutic oral doses of stimulant medication suggest attenuation of structural and functional alterations found in unmedicated ADHD subjects relative to findings in controls.\n\n\nCONCLUSIONS\nDespite the inherent limitations and heterogeneity of the extant MRI literature, our review suggests that therapeutic oral doses of stimulants decrease alterations in brain structure and function in subjects with ADHD relative to unmedicated subjects and controls. These medication-associated brain effects parallel, and may underlie, the well-established clinical benefits.", "title": "" }, { "docid": "6674065e17d2c6390293fd028aa9b964", "text": "Mobile robots and autonomous vehicles rely on multi-modal sensor setups to perceive and understand their surroundings. Aside from cameras, LiDAR sensors represent a central component of state-of-theart perception systems. In addition to accurate spatial perception, a comprehensive semantic understanding of the environment is essential for efficient and safe operation. In this paper we present a novel deep neural network architecture called LiLaNet for point-wise, multi-class semantic labeling of semi-dense LiDAR data. The network utilizes virtual image projections of the 3D point clouds for efficient inference. Further, we propose an automated process for large-scale cross-modal training data generation called Autolabeling, in order to boost semantic labeling performance while keeping the manual annotation effort low. The effectiveness of the proposed network architecture as well as the automated data generation process is demonstrated on a manually annotated ground truth dataset. LiLaNet is shown to significantly outperform current state-ofthe-art CNN architectures for LiDAR data. Applying our automatically generated large-scale training data yields a boost of up to 14 percentage points compared to networks trained on manually annotated data only.", "title": "" }, { "docid": "dbc57902c0655f1bdb3f7dbdcdb6fd5c", "text": "In this paper, a progressive learning technique for multi-class classification is proposed. 
This newly developed learning technique is independent of the number of class constraints and it can learn new classes while still retaining the knowledge of previous classes. Whenever a new class (non-native to the knowledge learnt thus far) is encountered, the neural network structure gets remodeled automatically by facilitating new neurons and interconnections, and the parameters are calculated in such a way that it retains the knowledge learnt thus far. This technique is suitable for realworld applications where the number of classes is often unknown and online learning from real-time data is required. The consistency and the complexity of the progressive learning technique are analyzed. Several standard datasets are used to evaluate the performance of the developed technique. A comparative study shows that the developed technique is superior. Key Words—Classification, machine learning, multi-class, sequential learning, progressive learning.", "title": "" }, { "docid": "d2704ce4ce7ccdb016d0d3698bcf1037", "text": "BACKGROUND & OBJECTIVES\nThe present study aimed to systematically quantify the well known risk of severe dengue during secondary infection in literature and to understand how epidemiological mechanisms of enhancement during the secondary infection influence the empirically estimated risk of severe dengue by means of mathematical modeling.\n\n\nMETHODS\nTwo conditional risks of severe dengue, i.e. symptomatic illness and dengue hemorrhagic fever (DHF) or dengue shock syndrome (DSS), given secondary infection were explored based on systematically searched prospective studies. A two-strain epidemiological model was employed to simulate the transmission dynamics of dengue and to identify the relevant data gaps in empirical observations.\n\n\nRESULTS\nUsing the variance-based weighting, the pooled relative risk (RR) of symptomatic illness during secondary infection was estimated at 9.4 [95% confidence interval (CI): 6.1-14.4], and similarly, RR of DHF/DSS was estimated to be 23.7 (95% CI: 15.3-36.9). A variation in the RR of DHF/DSS was observed among prospective studies. Using the mathematical modeling technique, we identified the duration of cross-protective immunity as an important modulator of the time-dependent behaviour of the RR of severe dengue. Different epidemiological mechanisms of enhancement during secondary infection yielded different RR of severe dengue.\n\n\nINTERPRETATION & CONCLUSION\nOptimal design of prospective cohort study for dengue should be considered, accounting for the time-dependence in the RR during the course of dengue epidemic. It is critical to statistically infer the duration of cross-protective immunity and clarify how the enhancement influences the epidemiological dynamics during secondary infection.", "title": "" }, { "docid": "b02f5af836c0d18933de091044ccb916", "text": "This research presents a mobile augmented reality (MAR) travel guide, named CorfuAR, which supports personalized recommendations. We report the development process and devise a theoretical model that explores the adoption of MAR applications through their emotional impact. A field study on Corfu visitors (n=105) shows that the functional properties of CorfuAR evoke feelings of pleasure and arousal, which, in turn, influence the behavioral intention of using it. This is the first study that empirically validates the relation between functional system properties, user emotions, and adoption behavior. 
The paper discusses also the theoretical and managerial implications of our study.", "title": "" }, { "docid": "d5ddc141311afb6050a58be88303b577", "text": "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster RCNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.", "title": "" }, { "docid": "ba7b51dc253da1a17aaf12becb1abfed", "text": "This papers aims to design a new approach in order to increase the performance of the decision making in model-based fault diagnosis when signature vectors of various faults are identical or closed. The proposed approach consists on taking into account the knowledge issued from the reliability analysis and the model-based fault diagnosis. The decision making, formalised as a bayesian network, is established with a priori knowledge on the dynamic component degradation through Markov chains. The effectiveness and performances of the technique are illustrated on a heating water process corrupted by faults. Copyright © 2006 IFAC", "title": "" }, { "docid": "ca1b189815ce5eb56c2b44e2c0c154aa", "text": "Synthetic data sets can be useful in a variety of situations, including repeatable regression testing and providing realistic - but not real - data to third parties for testing new software. Researchers, engineers, and software developers can test against a safe data set without affecting or even accessing the original data, insulating them from privacy and security concerns as well as letting them generate larger data sets than would be available using only real data. Practitioners use data mining technology to discover patterns in real data sets that aren't apparent at the outset. This article explores how to combine information derived from data mining applications with the descriptive ability of synthetic data generation software. Our goal is to demonstrate that at least some data mining techniques (in particular, a decision tree) can discover patterns that we can then use to inverse map into synthetic data sets. These synthetic data sets can be of any size and will faithfully exhibit the same (decision tree) patterns. 
Our work builds on two technologies: synthetic data definition language and predictive model markup language.", "title": "" }, { "docid": "cc56bbfe498556acb317fd325d750cf9", "text": "The goal of the current work is to evaluate semantic feature aggregation techniques in a task of gender classification of public social media texts in Russian. We collect Facebook posts of Russian-speaking users and apply them as a dataset for two topic modelling techniques and a distributional clustering approach. The output of the algorithms is applied as a feature aggregation method in a task of gender classification based on a smaller Facebook sample. The classification performance of the best model is favorably compared against the lemmas baseline and the state-of-the-art results reported for a different genre or language. The resulting successful features are exemplified, and the difference between the three techniques in terms of classification performance and feature contents are discussed, with the best technique clearly outperforming the others.", "title": "" }, { "docid": "830cee097cca9330a5099591f4d88825", "text": "Recent studies of knowledge representation attempt to project both entities and relations, which originally compose a high-dimensional and sparse knowledge graph, into a continuous low-dimensional space. One canonical approach TransE (Bordes et al., 2013) which represents entities and relations with vectors (embeddings), achieves leading performances solely with triplets, i.e. (head entity, relation, tail entity), in a knowledge base. The cutting-edge method DKRL (Xie et al., 2016) extends TransE via enhancing the embeddings with entity descriptions by means of deep neural network models. However, DKRL requires extra space to store parameters of inner layers, and relies on more hyperparameters to be tuned. Therefore, we create a single-layer model which requests much fewer parameters. The model measures the probability of each triplet along with corresponding entity descriptions, and learns contextual embeddings of entities, relations and words in descriptions simultaneously, via maximizing the loglikelihood of the observed knowledge. We evaluate our model in the tasks of knowledge graph completion and entity type classification with two benchmark datasets: FB500K and EN15K, respectively. Experimental results demonstrate that the proposed model outperforms both TransE and DKRL, indicating that it is both efficient and effective in learning better distributed representations for knowledge bases. c © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8fa721c98dac13157bcc891c06561ec7", "text": "Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total child-care is not yet being promoted, there are indications that it is „on the cards‟. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. 
We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.", "title": "" }, { "docid": "60f6d9303508494bcff9231266e490ad", "text": "I n the January 2001 issue of Computer (pp. 135-137), we published the Software Defect Reduction Top 10 List—one of two foci pursued by the National Science Foundation-sponsored Center for Empirically Based Software Engineering (CeBASE). COTS-based systems (CBS) provide the other CeBASE focus. For our intent, COTS software has the following characteristics: The buyer has no access to the source code; the vendor controls its development; and it has a nontrivial installed base (that is, more than one customer ; more than a few copies). Criteria for making the list are that each empirical result has • significant current and future impact on software dependability, timeli-ness, and cost; • diagnostic value with respect to cost-effective best practices; and • reasonable generality across applications domains, market sectors, and product sizes. These are the same criteria we used for our defect-reduction list, but they are harder to evaluate for CBS because it is a less mature area. CBS's roller-coaster ride along Gartner Group's visibility-maturity curve (http:// gartner11.gartnerweb.com/public/static/ hotc/hc00094769.html) reveals its relative immaturity as it progresses through a peak of inflated expectations (with many overenthusiastic organizational mandates to switch to CBS), a trough of disillusion-ment, and to a slope of enlightenment, to a plateau of productivity. We present the CBS Top 10 List as hypotheses, rather than results, that also serve as software challenges for enhancing our empirical understanding of CBS. More than 99 percent of all executing computer instructions come from COTS products. Each instruction passed a market test for value. • Source. The more than 99 percent figure derives from analyzing Department of Defense data (B. Boehm, \" Managing Software Productivity and Reuse, \" Computer, Sept. 1999, pp. 111-113). • Implications. Economic necessity drives extensive COTS use. Nobody can afford to write a general-purpose operating system or database management system. Every project should consider the CBS option, but carefully weigh CBS benefits, costs, and risks against other options. \" Market test \" means that someone willingly pays to install the COTS component, not that every instruction is used or proves valuable. More than half the features in large COTS software products go unused. working alone used 12 to 16 percent of Microsoft Word and PowerPoint measurement features, whereas a 10-person group used 26 to 29 percent of these features. • Implications. Adding features is an economic necessity for vendors but it introduces complexity for COTS adopters. This added complexity can require …", "title": "" }, { "docid": "9e3a7af7b8773f43ba32d30f3610af40", "text": "Several attempts to enhance statistical parametric speech synthesis have contemplated deep-learning-based postfil-ters, which learn to perform a mapping of the synthetic speech parameters to the natural ones, reducing the gap between them. In this paper, we introduce a new pre-training approach for neural networks, applied in LSTM-based postfilters for speech synthesis, with the objective of enhancing the quality of the synthesized speech in a more efficient manner. 
Our approach begins with an auto-regressive training of one LSTM network, which is then used as an initialization for postfilters based on a denoising autoencoder architecture. We show the advantages of this initialization on a set of multi-stream postfilters, which encompass a collection of denoising autoencoders for the set of MFCC and fundamental frequency parameters of the artificial voice. Results show that the initialization succeeds in lowering the training time of the LSTM networks and achieves better results in enhancing the statistical parametric speech in most cases, when compared to the common randomly initialized approach of the networks.", "title": "" } ]
scidocsrr
2cb43031d9050f9455ec1a90dbc5dc0c
Object Reading: Text Recognition for Object Recognition
[ { "docid": "58e3444f3d35d0ad45e5637e7c53efb5", "text": "An efficient method for text localization and recognition in real-world images is proposed. Thanks to effective pruning, it is able to exhaustively search the space of all character sequences in real time (200ms on a 640x480 image). The method exploits higher-order properties of text such as word text lines. We demonstrate that the grouping stage plays a key role in the text localization performance and that a robust and precise grouping stage is able to compensate errors of the character detector. The method includes a novel selector of Maximally Stable Extremal Regions (MSER) which exploits region topology. Experimental validation shows that 95.7% characters in the ICDAR dataset are detected using the novel selector of MSERs with a low sensitivity threshold. The proposed method was evaluated on the standard ICDAR 2003 dataset where it achieved state-of-the-art results in both text localization and recognition.", "title": "" } ]
[ { "docid": "338dcbb45ff0c1752eeb34ec1be1babe", "text": "I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural", "title": "" }, { "docid": "cbdace4636017f925b89ecf266fde019", "text": "It is traditionally known that wideband apertures lose bandwidth when placed over a ground plane. To overcome this issue, this paper introduces a new non-symmetric tightly coupled dipole element for wideband phased arrays. The proposed array antenna incorporates additional degrees of freedom to control capacitance and cancel the ground plane inductance. Specifically, each arm on the dipole is different than the other (or non-symmetric). The arms are identical near the center feed section but dissimilar towards the ends, forming a ball-and-cup. It is demonstrated that the non-symmetric qualities achieve wideband performance. Concurrently, a design example for planar installation with balun and matching network is presented to cover X-band. The balun avoids extraneous radiation, maintains the array's low-profile height and is printed on top of the ground plane connecting to the array aperture with 180° out of phase vertical twin-wire transmission lines. To demonstrate the concept, a 64-element array with integrated feed and matching network is designed, fabricated and verified experimentally. The array aperture is placed λ/7 (at 8 GHz) above the ground plane and shown to maintain a active VSWR less than 2 from 8-12.5 GHz while scanning up to 70° and 60° in E- and H-plane, respectively. The array's simulated diagonal plane cross-polarization is approximately 10 dB below the co-polarized component during 60° diagonal scan and follows the theoretical limit for an infinite current sheet.", "title": "" }, { "docid": "41468ef8950c372586485725478c80db", "text": "Sobolevicanthus transvaalensis n.sp. is described from the Cape Teal, Anas capensis Gmelin, 1789, collected in the Republic of South Africa. The new species possesses 8 skrjabinoid hooks 78–88 μm long (mean 85 μm) and a short claviform cirrus-sac 79–143 μm long and resembles S. javanensis (Davis, 1945) and S. terraereginae (Johnston, 1913). It can be distinguished from S. javanensis by its shorter cirrus-sac and smaller cirrus diameter, and by differences in the morphology of the accessory sac and vagina and in their position relative to the cirrus-sac. It can be separated from S. terraereginae on the basis of cirrus length and diameter. The basal diameter of the cirrus in S. terraereginae is three times that in S. transvaalensis. ac]19830414", "title": "" }, { "docid": "f14e128c17a95e8f549f822dad408133", "text": "Capparis Spinosa L. is an aromatic plant growing wild in dry regions around the Mediterranean basin. Capparis Spinosa was shown to possess several properties such as antioxidant, antifungal, and anti-hepatotoxic actions. In this work, we aimed to evaluate immunomodulatory properties of Capparis Spinosa leaf extracts in vitro on human peripheral blood mononuclear cells (PBMCs) from healthy individuals. Using MTT assay, we identified a range of Capparis Spinosa doses, which were not toxic. Unexpectedly, we found out that Capparis Spinosa aqueous fraction exhibited an increase in cell metabolic activity, even though similar doses did not affect cell proliferation as shown by CFSE. 
Interestingly, Capparis Spinosa aqueous fraction appeared to induce an overall anti-inflammatory response through significant inhibition of IL-17 and induction of IL-4 gene expression when PBMCs were treated with the non toxic doses of 100 and/or 500 μg/ml. Phytoscreening analysis of the used Capparis Spinosa preparations showed that these contain tannins; sterols, alkaloids; polyphenols and flavonoids. Surprisingly, quantification assays showed that our Capparis Spinosa preparation contains low amounts of polyphenols relative to Capparis Spinosa used in other studies. This Capparis Spinosa also appeared to act as a weaker scavenging free radical agent as evidenced by DPPH radical scavenging test. Finally, polyphenolic compounds including catechin, caffeic acid, syringic acid, rutin and ferulic acid were identified by HPLC, in the Capparis spinosa preparation. Altogether, these findings suggest that our Capparis Spinosa preparation contains interesting compounds, which could be used to suppress IL-17 and to enhance IL-4 gene expression in certain inflammatory situations. Other studies are underway in order to identify the compound(s) underlying this effect.", "title": "" }, { "docid": "0203b3995c21e5e7026fe787eaef6e09", "text": "Pose SLAM is the variant of simultaneous localization and map building (SLAM) is the variant of SLAM, in which only the robot trajectory is estimated and where landmarks are only used to produce relative constraints between robot poses. To reduce the computational cost of the information filter form of Pose SLAM and, at the same time, to delay inconsistency as much as possible, we introduce an approach that takes into account only highly informative loop-closure links and nonredundant poses. This approach includes constant time procedures to compute the distance between poses, the expected information gain for each potential link, and the exact marginal covariances while moving in open loop, as well as a procedure to recover the state after a loop closure that, in practical situations, scales linearly in terms of both time and memory. Using these procedures, the robot operates most of the time in open loop, and the cost of the loop closure is amortized over long trajectories. This way, the computational bottleneck shifts to data association, which is the search over the set of previously visited poses to determine good candidates for sensor registration. To speed up data association, we introduce a method to search for neighboring poses whose complexity ranges from logarithmic in the usual case to linear in degenerate situations. The method is based on organizing the pose information in a balanced tree whose internal levels are defined using interval arithmetic. The proposed Pose-SLAM approach is validated through simulations, real mapping sessions, and experiments using standard SLAM data sets.", "title": "" }, { "docid": "71c4e6e63eaeec06b5e8690c1a915c81", "text": "Measuring the similarity between words, sentences, paragraphs and documents is an important component in various tasks such as information retrieval, document clustering, word-sense disambiguation, automatic essay scoring, short answer grading, machine translation and text summarization. This survey discusses the existing works on text similarity through partitioning them into three approaches; String-based, Corpus-based and Knowledge-based similarities. 
Furthermore, samples of combination between these similarities are presented.", "title": "" }, { "docid": "81f4d9d472f9098eb27bd667f82fb42c", "text": "The Holy Quran is the reference book for more than 1.6 billion of Muslims all around the world Extracting information and knowledge from the Holy Quran is of high benefit for both specialized people in Islamic studies as well as non-specialized people. This paper initiates a series of research studies that aim to serve the Holy Quran and provide helpful and accurate information and knowledge to the all human beings. Also, the planned research studies aim to lay out a framework that will be used by researchers in the field of Arabic natural language processing by providing a ”Golden Dataset” along with useful techniques and information that will advance this field further. The aim of this paper is to find an approach for analyzing Arabic text and then providing statistical information which might be helpful for the people in this research area. In this paper the holly Quran text is preprocessed and then different text mining operations are applied to it to reveal simple facts about the terms of the holy Quran. The results show a variety of characteristics of the Holy Quran such as its most important words, its wordcloud and chapters with high term frequencies. All these results are based on term frequencies that are calculated using both Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF) methods. Keywords—Holy Quran; Text Mining; Arabic Natural Language Processing", "title": "" }, { "docid": "fe1bc993047a95102f4331f57b1f9197", "text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.", "title": "" }, { "docid": "ac2c1d325a242cc0037474d2a51c2b70", "text": "In female mammals, one of the two X chromosomes is silenced for dosage compensation between the sexes. X-chromosome inactivation is initiated in early embryogenesis by the Xist RNA that localizes to the inactive X chromosome. During development, the inactive X chromosome is further modified, a specialized form of facultative heterochromatin is formed and gene repression becomes stable and independent of Xist in somatic cells. The recent identification of several factors involved in this process has provided insights into the mechanism of Xist localization and gene silencing. The emerging picture is complex and suggests that chromosome-wide silencing can be partitioned into several steps, the molecular components of which are starting to be defined.", "title": "" }, { "docid": "b88a79221efb5afc717cb2f97761271d", "text": "BACKGROUND\nLymphangitic streaking, characterized by linear erythema on the skin, is most commonly observed in the setting of bacterial infection. However, a number of nonbacterial causes can result in lymphangitic streaking. 
We sought to elucidate the nonbacterial causes of lymphangitic streaking that may mimic bacterial infection to broaden clinicians' differential diagnosis for patients presenting with lymphangitic streaking.\n\n\nMETHODS\nWe performed a review of the literature, including all available reports pertaining to nonbacterial causes of lymphangitic streaking.\n\n\nRESULTS\nVarious nonbacterial causes can result in lymphangitic streaking, including viral and fungal infections, insect or spider bites, and iatrogenic etiologies.\n\n\nCONCLUSION\nAwareness of potential nonbacterial causes of superficial lymphangitis is important to avoid misdiagnosis and delay the administration of appropriate care.", "title": "" }, { "docid": "2c93fcf96c71c7c0a8dcad453da53f81", "text": "Production cars are designed to understeer and rarely do they oversteer. If a car could automatically compensate for an understeer/oversteer problem, the driver would enjoy nearly neutral steering under varying operating conditions. Four-wheel steering is a serious effort on the part of automotive design engineers to provide near-neutral steering. Also in situations like low speed cornering, vehicle parking and driving in city conditions with heavy traffic in tight spaces, driving would be very difficult due to vehicle’s larger wheelbase and track width. Hence there is a requirement of a mechanism which result in less turning radius and it can be achieved by implementing four wheel steering mechanism instead of regular two wheel steering. In this project Maruti Suzuki 800 is considered as a benchmark vehicle. The main aim of this project is to turn the rear wheels out of phase to the front wheels. In order to achieve this, a mechanism which consists of two bevel gears and intermediate shaft which transmit 100% torque as well turns rear wheels in out of phase was developed. The mechanism was modelled using CATIA and the motion simulation was done using ADAMS. A physical prototype was realised. The prototype was tested for its cornering ability through constant radius test and was found 50% reduction in turning radius and the vehicle was operated at low speed of 10 kmph.", "title": "" }, { "docid": "2fbad4d6dddfc681361e5534fb28bb5d", "text": "We describe an experiment conducted in Bangalore, India, for incentivizing a population of commuters to travel at less congested times. The goals were to reduce the travel times and increase the travel comfort of commuters, and to reduce congestion, fuel consumption and pollution. The project, called INSTANT (for the Infosys-Stanford Traffic project), ran for six months from Oct 6, 2008, to April 10, 2009 and incentivized the employees of Infosys Technologies, Bangalore. We describe the background, the incentive mechanism and the results. The INSTANT project involved about 14,000 commuters and succeeded in incentivizing many commuters to travel at uncongested times, thereby significantly reducing their commute times. Our approach of providing incentives to decongestors contrasts with the current practice of charging congestors. It also contrasts with prior work on “selfish routing” which focusses on the reaction of commuters to a spatial choice of routes (same time, different routes) as opposed to a temporal choice (same route, different times).", "title": "" }, { "docid": "90241619360fe97b83e2777438a6c4f8", "text": "Although K-means clustering algorithm is simple and popular, it has a fundamental drawback of falling into local optima that depend on the randomly generated initial centroid values. 
Optimization algorithms are well known for their ability to guide iterative computation in searching for global optima. They also speed up the clustering process by achieving early convergence. Contemporary optimization algorithms inspired by biology, including the Wolf, Firefly, Cuckoo, Bat and Ant algorithms, simulate swarm behavior in which peers are attracted while steering towards a global objective. It is found that these bio-inspired algorithms have their own virtues and could be logically integrated into K-means clustering to avoid local optima during iteration to convergence. In this paper, the constructs of the integration of bio-inspired optimization methods into K-means clustering are presented. The extended versions of clustering algorithms integrated with bio-inspired optimization methods produce improved results. Experiments are conducted to validate the benefits of the proposed approach.", "title": "" }, { "docid": "441a81522a192467b128b59d7af4c39c", "text": "Selective ensemble is a learning paradigm that follows an “overproduce and choose” strategy, where a number of candidate classifiers are trained, and a set of several classifiers that are accurate and diverse are selected to solve a problem. In this paper, the hybrid approach called D3C is presented; this approach is a hybrid model of ensemble pruning that is based on k-means clustering and the framework of dynamic selection and circulating in combination with a sequential search method. Additionally, a multilabel D3C is derived from D3C through employing a problem transformation for multi-label classification. Empirical study shows that D3C exhibits competitive performance against other high-performance methods, and experiments in multi-label datasets verify the feasibility of multi-label D3C. & 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "324773d7d78ab86e63c0df59929c454b", "text": "Academic search concerns the retrieval and profiling of information objects in the domain of academic research. In this paper we reveal important observations of academic search queries, and provide an algorithmic solution to address a type of failure during search sessions: null queries. We start by providing a general characterization of academic search queries, by analyzing a large-scale transaction log of a leading academic search engine. Unlike previous small-scale analyses of academic search queries, we find important differences with query characteristics known from web search. E.g., in academic search there is a substantially bigger proportion of entity queries, and a heavier tail in query length distribution. We then focus on search failures and, in particular, on null queries that lead to an empty search engine result page, on null sessions that contain such null queries, and on users who are prone to issue null queries. In academic search approximately 1 in 10 queries is a null query, and 25% of the sessions contain a null query. They appear in different types of search sessions, and prevent users from achieving their search goal. To address the high rate of null queries in academic search, we consider the task of providing query suggestions. Specifically we focus on a highly frequent query type: non-boolean informational queries. To this end we need to overcome query sparsity and make effective use of session information. We find that using entities helps to surface more relevant query suggestions in the face of query sparsity. 
We also find that query suggestions should be conditioned on the type of session in which they are offered to be more effective. After casting the session classification problem as a multi-label classification problem, we generate session-conditional query suggestions based on predicted session type. We find that this session-conditional method leads to significant improvements over a generic query suggestion method. Personalization yields very little further improvements over session-conditional query suggestions. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e48dae70582d949a60a5f6b5b05117a7", "text": "Background: Multiple-Valued Logic (MVL) is the non-binary-valued system, in which more than two levels of information content are available, i.e., L>2. In modern technologies, the dual level binary logic circuits have normally been used. However, these suffer from several significant issues such as the interconnection considerations including the parasitics, area and power dissipation. The MVL circuits have been proved to be consisting of reduced circuitry and increased efficiency in terms of higher utilization of the circuit resources through multiple levels of voltage. Innumerable algorithms have been developed for designing such MVL circuits. Extended form is one of the algebraic techniques used in designing these MVL circuits. Voltage mode design has also been employed for constructing various types of MVL circuits. Novelty: This paper proposes a novel MVLTRANS inverter, designed using conventional CMOS and pass transistor logic based MVLPTL inverter. Binary to MVL Converter/Encoder and MVL to binary Decoder/Converter are also presented in the paper. In addition to the proposed decoder circuit, a 4-bit novel MVL Binary decoder circuit is also proposed. Tools Used: All these circuits are designed, implemented and verified using Cadence® Virtuoso tools using 180 nm technology library.", "title": "" }, { "docid": "b9bf838263410114ec85c783d26d92aa", "text": "We give a denotational framework (a “meta model”) within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.", "title": "" }, { "docid": "97d49038cbea67d6bb9ef39d86e8cceb", "text": "Sibs with apparent Dyggve-Melchior-Clausen (DMC) dwarfism and normal intelligence are described. Three other familial and 3 sporadic cases with DMC dwarfism and normal intelligence are known. 
Twelve familial and 9 sporadic cases are known with the usual combination of DMC dwarfism and severe mental retardation. Since the two conditions appear to breed true they seem to be genetically different. We propose to name the former “Smith-McCort dwarfism” to clearly distinguish it from the DMC syndrome in which mental retardation is a constituent part. Both conditions are inherited as autosomal recessive traits. Spinal cord compression due to atlantoaxial instability is a serious and preventable complication of both disorders.", "title": "" }, { "docid": "e7100965a09aa55a5fe17959443e9004", "text": "Prior studies (Gergely et al., 1995; Woodward, 1998) have found that infants focus on the goals of an action over other details. The current studies tested whether infants would distinguish between a behavior that seemed to be goal-directed and one that seemed not to be. Infants in one condition saw an actor grasp one of two toys that sat side by side on a stage. Infants in the other condition saw the actor drop her hand onto one of the toys in a manner that looked unintentional. Once infants had been habituated to these events, they were shown test events in which either the path of motion or the object that was touched had changed. Nine-month-olds differentiated between these two actions. When they saw the actor grasp the toy, they looked longer on trials with a change in goal object than on trials with a change in path. When they saw the actor drop her hand onto the toy, they looked equally at the two test events. These findings did not result from infants being more interested in grasping as compared to inert hands. In a second study, 5-month-old infants showed patterns similar to those seen in 9-month-olds. These findings have implications for theories of the development of the concept of intention. They argue against the claim that infants are innately predisposed to interpret any motion of an animate agent as intentional.", "title": "" }, { "docid": "1e0fb0f628bd19231eeb0e36232c95ed", "text": "UsersÕ information needs are often too complex to be effectively expressed in standard query interfaces to full-text retrieval systems. A typical need is to find documents that are similar to a given source document, yet describing the content of a document in a few terms is a difficult task. We describe Phrasier, an interactive system for browsing, querying and relating documents within a digital library. Phrasier exploits keyphrases that have been automatically extracted from source documents to create links to similar documents and to suggest appropriate query phrases to users. PhrasierÕs keyphrase-based retrieval engine returns ranked lists of documents that are similar to a given source text. Evaluation indicates that PhrasierÕs keyphrase-based retrieval performs as well as full-text retrieval when recall and relevance scores assigned by human assessors are considered.", "title": "" } ]
scidocsrr
1c36fdc20ca0046a4806cf51a002bcf7
Towards a Dataset for Natural Language Requirements Processing
[ { "docid": "fb71d22cad59ba7cf5b9806e37df3340", "text": "Templates are effective tools for increasing the precision of natural language requirements and for avoiding ambiguities that may arise from the use of unrestricted natural language. When templates are applied, it is important to verify that the requirements are indeed written according to the templates. If done manually, checking conformance to templates is laborious, presenting a particular challenge when the task has to be repeated multiple times in response to changes in the requirements. In this article, using techniques from natural language processing (NLP), we develop an automated approach for checking conformance to templates. Specifically, we present a generalizable method for casting templates into NLP pattern matchers and reflect on our practical experience implementing automated checkers for two well-known templates in the requirements engineering community. We report on the application of our approach to four case studies. Our results indicate that: (1) our approach provides a robust and accurate basis for checking conformance to templates; and (2) the effectiveness of our approach is not compromised even when the requirements glossary terms are unknown. This makes our work particularly relevant to practice, as many industrial requirements documents have incomplete glossaries.", "title": "" }, { "docid": "ab7d6a9c9c07ee1a60f01d4017a3a25b", "text": "[Context and Motivation] Many a tool for finding ambiguities in natural language (NL) requirements specifications (RSs) is based on a parser and a parts-of-speech identifier, which are inherently imperfect on real NL text. Therefore, any such tool inherently has less than 100% recall. Consequently, running such a tool on a NL RS for a highly critical system does not eliminate the need for a complete manual search for ambiguity in the RS. [Question/Problem] Can an ambiguity-finding tool (AFT) be built that has 100% recall on the types of ambiguities that are in the AFT’s scope such that a manual search in an RS for ambiguities outside the AFT’s scope is significantly easier than a manual search of the RS for all ambiguities? [Principal Ideas/Results] This paper presents the design of a prototype AFT, SREE (Systemized Requirements Engineering Environment), whose goal is achieving a 100% recall rate for the ambiguities in its scope, even at the cost of a precision rate of less than 100%. The ambiguities that SREE searches for by lexical analysis are the ones whose keyword indicators are found in SREE’s ambiguity-indicator corpus that was constructed based on studies of several industrial strength RSs. SREE was run on two of these industrial strength RSs, and the time to do a completely manual search of these RSs is compared to the time to reject the false positives in SREE’s output plus the time to do a manual search of these RSs for only ambiguities not in SREE’s scope. [Contribution] SREE does not achieve its goals. However, the time comparison shows that the approach to divide ambiguity finding between an AFT with 100% recall for some types of ambiguity and a manual search for only the other types of ambiguity is promising enough to justify more work to improve the implementation of the approach. 
Some specific improvement suggestions are offered.", "title": "" }, { "docid": "d6e093ecc3325fcdd2e29b0b961b9b21", "text": "[Context and motivation] Natural language is the main representation means of industrial requirements documents, which implies that requirements documents are inherently ambiguous. There exist guidelines for ambiguity detection, such as the Ambiguity Handbook [1]. In order to detect ambiguities according to the existing guidelines, it is necessary to train analysts. [Question/problem] Although ambiguity detection guidelines were extensively discussed in the literature, ambiguity detection has not been automated yet. Automation of ambiguity detection is one of the goals of the presented paper. More precisely, the approach and tool presented in this paper have three goals: (1) to automate ambiguity detection, (2) to make it plausible to the analyst that ambiguities detected by the tool represent genuine problems of the analyzed document, and (3) to educate the analyst by explaining the sources of the detected ambiguities. [Principal ideas/results] The presented tool provides reliable ambiguity detection, in the sense that it detects four times as many genuine ambiguities as an average human analyst. Furthermore, the tool offers high-precision ambiguity detection and does not present too many false positives to the human analyst. [Contribution] The presented tool is able both to detect ambiguities and to explain their sources. Thus, besides pure ambiguity detection, it can be used to educate analysts, too. Furthermore, it provides significant potential for considerable time and cost savings and, at the same time, quality improvements in industrial requirements engineering.", "title": "" } ]
[ { "docid": "2b3335d6fb1469c4848a201115a78e2c", "text": "Laser grooving is used for the singulation of advanced CMOS wafers since it is believed that it exerts lower mechanical stress than traditional blade dicing. The very local heating of wafers, however, might result in high thermal stress around the heat affected zone. In this work we present a model to predict the temperature distribution, material removal, and the resulting stress, in a sandwiched structure of metals and dielectric materials that are commonly found in the back-end of line of semiconductor wafers. Simulation results on realistic three dimensional back-end structures reveal that the presence of metals clearly affects both the ablation depth, and the stress in the material. Experiments showed a similar observation for the ablation depth. The shape of the crater, however, was found to be more uniform than predicted by simulations, which is probably due to the redistribution of molten metal.", "title": "" }, { "docid": "1c2acb749d89626cd17fd58fd7f510e3", "text": "The lack of control of the content published is broadly regarded as a positive aspect of the Web, assuring freedom of speech to its users. On the other hand, there is also a lack of control of the content accessed by users when browsing Web pages. In some situations this lack of control may be undesired. For instance, parents may not desire their children to have access to offensive content available on the Web. In particular, accessing Web pages with nude images is among the most common problem of this sort. One way to tackle this problem is by using automated offensive image detection algorithms which can filter undesired images. Recent approaches on nude image detection use a combination of features based on color, texture, shape and other low level features in order to describe the image content. These features are then used by a classifier which is able to detect offensive images accordingly. In this paper we propose SNIF - simple nude image finder - which uses a color based feature only, extracted by an effective and efficient algorithm for image description, the border/interior pixel classification (BIC), combined with a machine learning technique, namely support vector machines (SVM). SNIF uses a simpler feature model when compared to previously proposed methods, which makes it a fast image classifier. The experiments carried out depict that the proposed method, despite its simplicity, is capable to identify up to 98% of nude images from the test set. This indicates that SNIF is as effective as previously proposed methods for detecting nude images.", "title": "" }, { "docid": "ccff1c7fa149a033b49c3a6330d4e0f3", "text": "Stroke is the leading cause of permanent adult disability in the U.S., frequently resulting in chronic motor impairments. Rehabilitation of the upper limb, particularly the hand, is especially important as arm and hand deficits post-stroke limit the performance of activities of daily living and, subsequently, functional independence. Hand rehabilitation is challenging due to the complexity of motor control of the hand. New instrumentation is needed to facilitate examination of the hand. Thus, a novel actuated exoskeleton for the index finger, the FingerBot, was developed to permit the study of finger kinetics and kinematics under a variety of conditions. 
Two such novel environments, one applying a spring-like extension torque proportional to angular displacement at each finger joint and another applying a constant extension torque at each joint, were compared in 10 stroke survivors with the FingerBot. Subjects attempted to reach targets located throughout the finger workspace. The constant extension torque assistance resulted in a greater workspace area (p < 0.02) and a larger active range of motion for the metacarpophalangeal joint (p < 0.01) than the spring-like assistance. Additionally, accuracy in terms of reaching the target was greater with the constant extension assistance as compared to no assistance. The FingerBot can be a valuable tool in assessing various hand rehabilitation paradigms following stroke.", "title": "" }, { "docid": "c60ffb344e85887e06ed178d4941eb0e", "text": "Multi-label learning arises in many real-world tasks where an object is naturally associated with multiple concepts. It is well-accepted that, in order to achieve a good performance, the relationship among labels should be exploited. Most existing approaches require the label relationship as prior knowledge, or exploit by counting the label co-occurrence. In this paper, we propose the MAHR approach, which is able to automatically discover and exploit label relationship. Our basic idea is that, if two labels are related, the hypothesis generated for one label can be helpful for the other label. MAHR implements the idea as a boosting approach with a hypothesis reuse mechanism. In each boosting round, the base learner for a label is generated by not only learning on its own task but also reusing the hypotheses from other labels, and the amount of reuse across labels provides an estimate of the label relationship. Extensive experimental results validate that MAHR is able to achieve superior performance and discover reasonable label relationship. Moreover, we disclose that the label relationship is usually asymmetric.", "title": "" }, { "docid": "7c525afc11c41e0a8ca6e8c48bdec97c", "text": "AT commands, originally designed in the early 80s for controlling modems, are still in use in most modern smartphones to support telephony functions. The role of AT commands in these devices has vastly expanded through vendor-specific customizations, yet the extent of their functionality is unclear and poorly documented. In this paper, we systematically retrieve and extract 3,500 AT commands from over 2,000 Android smartphone firmware images across 11 vendors. We methodically test our corpus of AT commands against eight Android devices from four different vendors through their USB interface and characterize the powerful functionality exposed, including the ability to rewrite device firmware, bypass Android security mechanisms, exfiltrate sensitive device information, perform screen unlocks, and inject touch events solely through the use of AT commands. We demonstrate that the AT command interface contains an alarming amount of unconstrained functionality and represents a broad attack surface on Android devices.", "title": "" }, { "docid": "d56855e068a4524fda44d93ac9763cab", "text": "greatest cause of mortality from cardiovascular disease, after myocardial infarction and cerebrovascular stroke. From hospital epidemiological data it has been calculated that the incidence of PE in the USA is 1 per 1,000 annually. The real number is likely to be larger, since the condition goes unrecognised in many patients. 
Mortality due to PE has been estimated to exceed 15% in the first three months after diagnosis. PE is a dramatic and life-threatening complication of deep venous thrombosis (DVT). For this reason, the prevention, diagnosis and treatment of DVT is of special importance, since symptomatic PE occurs in 30% of those affected. If asymptomatic episodes are also included, it is estimated that 50-60% of DVT patients develop PE. DVT and PE are manifestations of the same entity, namely thromboembolic disease. If we extrapolate the epidemiological data from the USA to Greece, which has a population of about ten million, 20,000 new cases of thromboembolic disease may be expected annually. Of these patients, PE will occur in 10,000, of which 6,000 will have symptoms and 900 will die during the first trimester.", "title": "" }, { "docid": "ed9c0cdb74950bf0f1288931707b9d08", "text": "Introduction This chapter reviews the theoretical and empirical literature on the concept of credibility and its areas of application relevant to information science and technology, encompassing several disciplinary approaches. An information seeker's environment—the Internet, television, newspapers, schools, libraries, bookstores, and social networks—abounds with information resources that need to be evaluated for both their usefulness and their likely level of accuracy. As people gain access to a wider variety of information resources, they face greater uncertainty regarding who and what can be believed and, indeed, who or what is responsible for the information they encounter. Moreover, they have to develop new skills and strategies for determining how to assess the credibility of an information source. Historically, the credibility of information has been maintained largely by professional knowledge workers such as editors, reviewers, publishers, news reporters, and librarians. Today, quality control mechanisms are evolving in such a way that a vast amount of information accessed through a wide variety of systems and resources is out of date, incomplete, poorly organized, or simply inaccurate (Janes & Rosenfeld, 1996). Credibility has been examined across a number of fields ranging from communication, information science, psychology, marketing, and the management sciences to interdisciplinary efforts in human-computer interaction (HCI). Each field has examined the construct and its practical significance using fundamentally different approaches, goals, and presuppositions, all of which results in conflicting views of credibility and its effects. The notion of credibility has been discussed at least since Aristotle's examination of ethos and his observations of speakers' relative abilities to persuade listeners. Disciplinary approaches to investigating credibility systematically developed only in the last century, beginning within the field of communication. A landmark among these efforts was the work of Hovland and colleagues (Hovland, Jannis, & Kelley, 1953; Hovland & Weiss, 1951), who focused on the influence of various characteristics of a source on a recipient's message acceptance. This work was followed by decades of interest in the relative credibility of media involving comparisons between newspapers, radio, television, Communication researchers have tended to focus on sources and media, viewing credibility as a perceived characteristic. Within information science, the focus is on the evaluation of information, most typically instantiated in documents and statements. 
Here, credibility has been viewed largely as a criterion for relevance judgment, with researchers focusing on how information seekers assess a document's likely level of This brief account highlights an often implicit focus on varying objects …", "title": "" }, { "docid": "9e037018da3ebcd7967b9fbf07c83909", "text": "Studying temporal dynamics of topics in social media is very useful to understand online user behaviors. Most of the existing work on this subject usually monitors the global trends, ignoring variation among communities. Since users from different communities tend to have varying tastes and interests, capturing communitylevel temporal change can improve the understanding and management of social content. Additionally, it can further facilitate the applications such as community discovery, temporal prediction and online marketing. However, this kind of extraction becomes challenging due to the intricate interactions between community and topic, and intractable computational complexity. In this paper, we take a unified solution towards the communitylevel topic dynamic extraction. A probabilistic model, CosTot (Community Specific Topics-over-Time) is proposed to uncover the hidden topics and communities, as well as capture community-specific temporal dynamics. Specifically, CosTot considers text, time, and network information simultaneously, and well discovers the interactions between community and topic over time. We then discuss the approximate inference implementation to enable scalable computation of model parameters, especially for large social data. Based on this, the application layer support for multi-scale temporal analysis and community exploration is also investigated. We conduct extensive experimental studies on a large real microblog dataset, and demonstrate the superiority of proposed model on tasks of time stamp prediction, link prediction and topic perplexity.", "title": "" }, { "docid": "8c4e02333f466c074ad332d904f655b9", "text": "Context. The global communication system is in a tremendous growth, leading to wide range of data generation. The Telecom operators in various Telecom Industries, that generate large amount of data has a need to manage these data efficiently. As the technology involved in the database management systems is increasing, there is a remarkable growth of NoSQL databases in the 20 century. Apache Cassandra is an advanced NoSQL database system, which is popular for handling semi-structured and unstructured format of Big Data. Cassandra has an effective way of compressing data by using different compaction strategies. This research is focused on analyzing the performances of different compaction strategies in different use cases for default Cassandra stress model. The analysis can suggest better usage of compaction strategies in Cassandra, for a write heavy workload. Objectives. In this study, we investigate the appropriate performance metrics to evaluate the performance of compaction strategies. We provide the detailed analysis of Size Tiered Compaction Strategy, Date Tiered Compaction Strategy, and Leveled Compaction Strategy for a write heavy (90/10) work load, using default cassandra stress tool. Methods. A detailed literature research has been conducted to study the NoSQL databases, and the working of different compaction strategies in Apache Cassandra. The performances metrics are considered by the understanding of the literature research conducted, and considering the opinions of supervisors and Ericsson’s Apache Cassandra team. 
Two different tools were developed for collecting the performances of the considered metrics. The first tool was developed using Jython scripting language to collect the cassandra metrics, and the second tool was developed using python scripting language to collect the Operating System metrics. The graphs have been generated in Microsoft Excel, using the values obtained from the scripts. Results. Date Tiered Compaction Strategy and Size Tiered Compaction strategy showed more or less similar behaviour during the stress tests conducted. Level Tiered Compaction strategy has showed some remarkable results that effected the system performance, as compared to date tiered compaction and size tiered compaction strategies. Date tiered compaction strategy does not perform well for default cassandra stress model. Size tiered compaction can be preferred for default cassandra stress model, but not considerable for big data. Conclusions. With a detailed analysis and logical comparison of metrics, we finally conclude that Level Tiered Compaction Strategy performs better for a write heavy (90/10) workload while using default cassandra stress model, as compared to size tiered compaction and date tiered compaction strategies.", "title": "" }, { "docid": "9a3b5431f0105949db86c278d3e97186", "text": "The vertex-centric programming model is an established computational paradigm recently incorporated into distributed processing frameworks to address challenges in large-scale graph processing. Billion-node graphs that exceed the memory capacity of commodity machines are not well supported by popular Big Data tools like MapReduce, which are notoriously poor performing for iterative graph algorithms such as PageRank. In response, a new type of framework challenges one to “think like a vertex” (TLAV) and implements user-defined programs from the perspective of a vertex rather than a graph. Such an approach improves locality, demonstrates linear scalability, and provides a natural way to express and compute many iterative graph algorithms. These frameworks are simple to program and widely applicable but, like an operating system, are composed of several intricate, interdependent components, of which a thorough understanding is necessary in order to elicit top performance at scale. To this end, the first comprehensive survey of TLAV frameworks is presented. In this survey, the vertex-centric approach to graph processing is overviewed, TLAV frameworks are deconstructed into four main components and respectively analyzed, and TLAV implementations are reviewed and categorized.", "title": "" }, { "docid": "10f31578666795a3b1ad852929769fc5", "text": "CNNs have been successfully used in audio, image and text classification, analysis and generation [12,17,18], whereas the RNNs with LSTM cells [5,6] have been widely adopted for solving sequence transduction problems such as language modeling and machine translation [19,3,5]. The RNN models typically align the element positions of the input and output sequences to steps in computation time for generating the sequenced hidden states, with each depending on the current element and the previous hidden state. Such operations are inherently sequential which precludes parallelization and becomes the performance bottleneck. This situation has motivated researchers to extend the easily parallelizable CNN models for more efficient sequence-to-sequence mapping. 
Once such efforts can deliver satisfactory quality, the usage of CNN in deep learning would be significantly broadened.", "title": "" }, { "docid": "9d1c0462c27516974a2b4e520916201e", "text": "The current method of grading prostate cancer on histology uses the Gleason system, which describes five increasingly malignant stages of cancer according to qualitative analysis of tissue architecture. The Gleason grading system has been shown to suffer from inter- and intra-observer variability. In this paper we present a new method for automated and quantitative grading of prostate biopsy specimens. A total of 102 graph-based, morphological, and textural features are extracted from each tissue patch in order to quantify the arrangement of nuclei and glandular structures within digitized images of histological prostate tissue specimens. A support vector machine (SVM) is used to classify the digitized histology slides into one of four different tissue classes: benign epithelium, benign stroma, Gleason grade 3 adenocarcinoma, and Gleason grade 4 adenocarcinoma. The SVM classifier was able to distinguish between all four types of tissue patterns, achieving an accuracy of 92.8% when distinguishing between Gleason grade 3 and stroma, 92.4% between epithelium and stroma, and 76.9% between Gleason grades 3 and 4. Both textural and graph-based features were found to be important in discriminating between different tissue classes. This work suggests that the current Gleason grading scheme can be improved by utilizing quantitative image analysis to aid pathologists in producing an accurate and reproducible diagnosis", "title": "" }, { "docid": "9b5224b94b448d5dabbd545aedd293f8", "text": "the topic (a) has been dedicated to extolling its use as a decisionmaking criterion; (b) has presented isolated numerical examples of its calculation/determination; and (c) has considered it as part of the general discussions of profitability and discussed its role in customer acquisition decisions and customer acquisition/retention trade-offs. There has been a dearth of general modeling of the topic. This paper presents a series of mathematical models for determination of customer lifetime value. The choice of the models is based on a systematic theoretical taxonomy and on assumptions grounded in customer behavior. In NADA I. NASR is a doctoral student in Marketing at the School addition, selected managerial applications of these general models of of Management, Boston University. customer lifetime value are offered. 1998 John Wiley & Sons, Inc. and Direct Marketing Educational Foundation, Inc. CCC 1094-9968/98/010017-14", "title": "" }, { "docid": "79102cc14ce0d11b52b4288d2e52de10", "text": "This paper presents a text detection method based on Extremal Regions (ERs) and Corner-HOG feature. Local Histogram of Oriented Gradient (HOG) extracted around corners (Corner-HOG) is used to effectively prune the non-text components in the component tree. Experimental results show that the Corner-HOG based pruning method can discard an average of 83.06% of all ERs in an image while preserving a recall of 90.51% of the text components. The remaining ERs are then grouped into text lines and candidate text lines are verified using black-white transition feature and the covariance descriptor of HOG. 
Experimental results on the 2011 Robust Reading Competition dataset show that the proposed text detection method provides promising performance.", "title": "" }, { "docid": "e017a4bed5bec5bb212bb82e78d68236", "text": "Patent claim sentences, despite their legal importance in patent documents, still pose difficulties for state-of-the-art statistical machine translation (SMT) systems owing to their extreme lengths and their special sentence structure. This paper describes a method for improving the translation quality of claim sentences, by taking into account the features specific to the claim sublanguage. Our method overcomes the issue of special sentence structure, by transferring the sublanguage-specific sentence structure (SSSS) from the source language to the target language, using a set of synchronous context-free grammar rules. Our method also overcomes the issue of extreme lengths by taking the sentence components to be the processing unit for SMT. The results of an experiment demonstrate that our SSSS transfer method, used in conjunction with pre-ordering, significantly improves the translation quality in terms of BLEU scores by five points, in both English-to-Japanese and Japanese-to-English directions. The experiment also shows that the SSSS transfer method significantly improves structural appropriateness in the translated sentences in both translation directions, which is indicated by substantial gains over 30 points in RIBES scores.", "title": "" }, { "docid": "f06d083ebd1449b1fd84e826898c2fda", "text": "The resolution of any linear imaging system is given by its point spread function (PSF) that quantifies the blur of an object point in the image. The sharper the PSF, the better the resolution is. In standard fluorescence microscopy, however, diffraction dictates a PSF with a cigar-shaped main maximum, called the focal spot, which extends over at least half the wavelength of light (λ = 400–700 nm) in the focal plane and >λ along the optical axis (z). Although concepts have been developed to sharpen the focal spot both laterally and axially, none of them has reached their ultimate goal: a spherical spot that can be arbitrarily downscaled in size. Here we introduce a fluorescence microscope that creates nearly spherical focal spots of 40–45 nm (λ/16) in diameter. Fully relying on focused light, this lens-based fluorescence nanoscope unravels the interior of cells noninvasively, uniquely dissecting their sub-λ–sized organelles.", "title": "" }, { "docid": "a25c24018499ae1da6d5ff50c2412fec", "text": "In the rapid change of drug scenarios, as the powerful development in the drug market, particularly in the number and the kind of the compound available, Internet plays a dominant role to become one of the major \"drug market\". The European Commission funded the Psychonaut Web Mapping Project (carried out in the time-frame January 2008-December 2009), with the aim to start/implement an Early Warning System (through the data/information collected from the Web virtual market), to identify and categorise novel recreational drugs/psychoactive compounds (synthetical/herbal drugs), and new trends in drug use to provide information for immediate and prevention intervention. 
The Psychonaut is a multi-site research project involving 8 research centres (De Sleutel, Belgium; University of Hertfordshire School of Pharmacy, St George's University of London, England; A-klinikkasäätiö, Finlandia; Klinik für Psychiatrie und Psychotherapie, Germany; Assessorato Salute Regione Marche, Italy; Drug Abuse Unit, Spain; Centre of Competence Bergen Clinics Foundation, Norway) based in 7 European Countries (England, Italy, Belgium, Finland, Germany, Spain, Norway).", "title": "" }, { "docid": "9d9e9a25e19c83a2a435128823a6519a", "text": "The rapid deployment of millions of mobile sensors and smartphones has resulted in a demand for opportunistic encounter-based networking to support mobile social networking applications and proximity-based gaming. However, the success of these emerging networks is limited by the lack of effective and energy efficient neighbor discovery protocols. While probabilistic approaches perform well for the average case, they exhibit long tails resulting in high upper bounds on neighbor discovery time. Recent deterministic protocols, which allow nodes to wake up at specific timeslots according to a particular pattern, improve on the worst case bound, but do so by sacrificing average case performance. In response to these limitations, we have designed Searchlight, a highly effective asynchronous discovery protocol that is built on three basic ideas. First, it leverages the constant offset between periodic awake slots to design a simple probing-based approach to ensure discovery. Second, it allows awake slots to cover larger sections of time, which ultimately reduces total awake time drastically. Finally, Searchlight has the option to employ probabilistic techniques with its deterministic approach that can considerably improve its performance in the average case when all nodes have the same duty cycle. We validate Searchlight through analysis and real-world experiments on smartphones that show considerable improvement (up to 50%) in worst-case discovery latency over existing approaches in almost all cases, irrespective of duty cycle symmetry.", "title": "" }, { "docid": "48a4d6b30131097d721905ae148a03dd", "text": "68 AI MAGAZINE ■ I claim that achieving real human-level artificial intelligence would necessarily imply that most of the tasks that humans perform for pay could be automated. Rather than work toward this goal of automation by building special-purpose systems, I argue for the development of general-purpose, educable systems that can learn and be taught to perform any of the thousands of jobs that humans can perform. Joining others who have made similar proposals, I advocate beginning with a system that has minimal, although extensive, built-in capabilities. These would have to include the ability to improve through learning along with many other abilities.", "title": "" }, { "docid": "8204901127ca96822f86ada6f97aac8e", "text": "This paper suggests a fast and stable Target Tracking system in collaborative control of UAV and UGV. Wi-Fi communication range is limited in collaborative control of UAV and UGV. Thus, to secure a stable communications, UAV and UGV have to be kept within a certain distance from each other. But existing method which uses UAV Vertical Camera to follow the motion of UGV is likely to lose a target with a sudden movement change. Eventually, UGV has disadvantages that it could only move at a low speed and not make any sudden change of direction in order to keep track of the target. 
Therefore, we suggest utilizing the AR Drone UAV front camera to track the fast-moving, omnidirectional Mecanum Wheel UGV. Keywords—Collaborative control, UAV, UGV, Target Tracking.", "title": "" } ]
scidocsrr
d8bd4cc4a67ef5481840a162b55834a1
A Hybrid Approach to Rumour Detection in Microblogging Platforms
[ { "docid": "d1fa7cf9a48f1ad5502f6aec2981f79a", "text": "Despite the increasing use of social media platforms for information and news gathering, its unmoderated nature often leads to the emergence and spread of rumours, i.e., items of information that are unverified at the time of posting. At the same time, the openness of social media platforms provides opportunities to study how users share and discuss rumours, and to explore how to automatically assess their veracity, using natural language processing and data mining techniques. In this article, we introduce and discuss two types of rumours that circulate on social media: long-standing rumours that circulate for long periods of time, and newly emerging rumours spawned during fast-paced events such as breaking news, where reports are released piecemeal and often with an unverified status in their early stages. We provide an overview of research into social media rumours with the ultimate goal of developing a rumour classification system that consists of four components: rumour detection, rumour tracking, rumour stance classification, and rumour veracity classification. We delve into the approaches presented in the scientific literature for the development of each of these four components. We summarise the efforts and achievements so far toward the development of rumour classification systems and conclude with suggestions for avenues for future research in social media mining for the detection and resolution of rumours.", "title": "" }, { "docid": "f517c85255d14b223ab14e8a213a9026", "text": "Tools that are able to detect unverified information posted on social media during a news event can help to avoid the spread of rumours that turn out to be false. In this paper we compare a novel approach using Conditional Random Fields that learns from the sequential dynamics of social media posts with the current state-of-the-art rumour detection system, as well as other baselines. In contrast to existing work, our classifier does not need to observe tweets querying the stance of a post to deem it a rumour but, instead, exploits context learned during the event. Our classifier has improved precision and recall over the state-of-the-art classifier that relies on querying tweets, as well as outperforming our best baseline. Moreover, the results provide evidence for the generalisability of our classifier.", "title": "" } ]
[ { "docid": "ee5729a9ec24fbb951076a43d4945e8e", "text": "Enhancing the performance of emotional speaker recognition process has witnessed an increasing interest in the last years. This paper highlights a methodology for speaker recognition under different emotional states based on the multiclass Support Vector Machine (SVM) classifier. We compare two feature extraction methods which are used to represent emotional speech utterances in order to obtain best accuracies. The first method known as traditional Mel-Frequency Cepstral Coefficients (MFCC) and the second one is MFCC combined with Shifted-Delta-Cepstra (MFCC-SDC). Experimentations are conducted on IEMOCAP database using two multiclass SVM approaches: One-Against-One (OAO) and One Against-All (OAA). Obtained results show that MFCC-SDC features outperform the conventional MFCC. Keywords—Emotion; Speaker recognition; Mel Frequency Cepstral Coefficients (MFCC); Shifted-Delta-Cepstral (SDC); SVM", "title": "" }, { "docid": "e9a154af3a041cadc5986b7369ce841b", "text": "Metrological characterization of high-performance ΔΣ Analog-to-Digital Converters (ADCs) poses severe challenges to reference instrumentation and standard methods. In this paper, most important tests related to noise and effective resolution, nonlinearity, environmental uncertainty, and stability are proved and validated in the specific case of a high-performance ΔΣ ADC. In particular, tests setups are proposed and discussed and the definitions used to assess the performance are clearly stated in order to identify procedures and guidelines for high-resolution ADCs characterization. An experimental case study of the high-performance ΔΣ ADC DS-22 developed at CERN is reported and discussed by presenting effective alternative test setups. Experimental results show that common characterization methods by the IEEE standards 1241 [1] and 1057 [2] cannot be used and alternative strategies turn out to be effective.", "title": "" }, { "docid": "cd0ad1783e0ef64300cd59bb2fab27d1", "text": "Game Theory (GT) has been used with excellent results to model and optimize the operation of a huge number of real-world systems, including in communications and networking. Using a tutorial style, this paper surveys and updates the literature contributions that have applied a diverse set of theoretical games to solve a variety of challenging problems, namely in wireless data communication networks. During our literature discussion, the games are initially divided into three groups: classical, evolutionary, and incomplete information. Then, the classical games are further divided into three subgroups: non-cooperative, repeated, and cooperative. This paper reviews applications of games to develop adaptive algorithms and protocols for the efficient operation of some standardized uses cases at the edge of emerging heterogeneous networks. Finally, we highlight the important challenges, open issues, and future research directions where GT can bring beneficial outcomes to emerging wireless data networking applications.", "title": "" }, { "docid": "72fea311bf7519db3d4c081a514709ab", "text": "This paper proposes a frequency-modulation control scheme for a dc/dc current-source parallel-resonant converter with two possible configurations. 
The basic configuration comprises an external voltage loop, an internal current loop, and a frequency modulator: the voltage loop is responsible for regulating the output voltage, the current loop makes the system controllable and limits the input current, and the modulator provides robustness against variations in resonant component values. The enhanced configuration introduces the output inductor current as a feed-forward term and clearly improves the transient response to fast load changes. The theoretical design of these control schemes is performed systematically by first deriving their small-signal models and second using Bode diagram analysis. The actual performance of the proposed control schemes is experimentally validated by testing on a laboratory prototype.", "title": "" }, { "docid": "d928f199fe3ececa09033ac636f5a147", "text": "The paper reviews the development of the energy system simulation tool DNA (Dynamic Network Analysis). DNA has been developed since 1989 to be able to handle models of any kind of energy system based on the control volume approach, usually systems of lumped parameter components. DNA has proven to be a useful tool in the analysis and optimization of several types of thermal systems: Steam turbines, gas turbines, fuels cells, gasification, refrigeration and heat pumps for both conventional fossil fuels and different types of biomass. DNA is applicable for models of both steady state and dynamic operation. The program decides at runtime to apply the DAE solver if the system contains differential equations. This makes it easy to extend an existing steady state model to simulate dynamic operation of the plant. The use of the program is illustrated by examples of gas turbine models. The paper also gives an overview of the recent implementation of DNA as a Matlab extension (Mex).", "title": "" }, { "docid": "296e9204869a3a453dd304fc3b4b8c4b", "text": "Today, travelers are provided large amount information which includes Web sites and tourist magazines about introduction of tourist spot. However, it is not easy for users to process the information in a short time. Therefore travelers prefer to receive pertinent information easier and have that information presented in a clear and concise manner. This paper proposes a personalization method for tourist Point of Interest (POI) Recommendation.", "title": "" }, { "docid": "639b9ff274e5242c4bfc6a99d9c6963e", "text": "Construction management suffers from many problems which need to be solved or better understood. The research described in this paper evaluates the effectiveness of implementing the Last Planner System (LPS) to improve construction planning practice and enhance site management in the Saudi construction industry. To do so, LPS was implemented in two large state-owned construction projects through an action research process. The data collection methods employed included interviews, observations and a survey questionnaire. The findings identify major benefits covering many aspects of project management, including improved construction planning, enhanced site management and better communication and coordination between the parties involved. The fact that the structural work in one of the projects studied was completed two weeks ahead of schedule provides evidence of improvement of the specific site construction planning practices. 
The paper also describes barriers to the realisation the full potential of LPS, including the involvement of many subcontractors and people’s commitment and attitude to time.", "title": "" }, { "docid": "7dcc7cdff8a9196c716add8a1faf0203", "text": "Power modulators for compact, repetitive systems are continually faced with new requirements as the corresponding system objectives increase. Changes in pulse rate frequency or number of pulses significantly impact the design of the power conditioning system. In order to meet future power supply requirements, we have developed several high voltage (HV) capacitor charging power supplies (CCPS). This effort focuses on a volume of 6\" x 6\" x 14\" and a weight of 25 lbs. The primary focus was to increase the effective capacitor charge rate, or power output, for the given size and weight. Although increased power output was the principal objective, efficiency and repeatability were also considered. A number of DC-DC converter topologies were compared to determine the optimal design. In order to push the limits of output power, numerous resonant converter parameters were examined. Comparisons of numerous topologies, HV transformers and rectifiers, and switching frequency ranges are presented. The impacts of the control system and integration requirements are also considered.", "title": "" }, { "docid": "afa915b558294481472cdb554442c44c", "text": "In order to automate routine fecal examination for parasitic diseases, the authors propose in this study a computer processing algorithm using digital image processing techniques and an artificial neural network (ANN) classifier The morphometric characteristics of eggs of human parasites in fecal specimens were extracted from microscopic images through digital image processing. An ANN then identified the parasite species based on those characteristics. The authors selected four morphometric features based on three morphological characteristics representing shape, shell smoothness, and size. A total of 82 microscopic images containing seven common human helminth eggs were used. The first stage (ANN-1) of the proposed ANN classification system isolated eggs from confusing artifacts. The second stage (ANN-2) classified eggs by species. The performance of ANN was evaluated by the tenfold cross-validation method to obviate the dependency on the selection of training samples. Cross-validation results showed 86.1% average correct classification ratio for ANN-1 and 90.3% for ANN-2 with small variances of 46.0 and 39.0, respectively. The algorithm developed will be an essential part of a completely automated fecal examination system.", "title": "" }, { "docid": "6d5e80293931396556cf5fbe64e9c2d2", "text": "Rotors of electrical high speed machines are subject to high stress, limiting the rated power of the machines. This paper describes the design process of a high-speed rotor of a Permanent Magnet Synchronous Machine (PMSM) for a rated power of 10kW at 100,000 rpm. Therefore, at the initial design the impact of the rotor radius to critical parameters is analyzed analytically. In particular, critical parameters are mechanical stress due to high centrifugal forces and natural bending frequencies. Furthermore, air friction losses, heating the rotor and the stator additionally, are no longer negligible compared to conventional machines and must be considered in the design process. These mechanical attributes are controversial to the electromagnetic design, increasing the effective magnetic air gap, for example. 
Thus, investigations are performed to achieve sufficient mechanical strength without a significant reduction of air gap flux density or causing thermal problems. After initial design by means of analytical estimations, an optimization of rotor geometry and materials is performed by means of the finite element method (FEM).", "title": "" }, { "docid": "63c1747c8803802e9d4cbc7d6231fa1a", "text": "Crowdfunding is an alternative model for project financing, whereby a large and dispersed audience participates through relatively small financial contributions, in exchange for physical, financial or social rewards. It is usually done via Internet-based platforms that act as a bridge between the crowd and the projects. Over the past few years, academics have explored this topic, both empirically and theoretically. However, the mixed findings and array of theories used have come to warrant a critical review of past works. To this end, we perform a systematic review of the literature on crowdfunding and seek to extract (1) the key management theories that have been applied in the context of crowdfunding and how these have been extended, and (2) the principal factors contributing to success for the different crowdfunding models, where success entails both fundraising and timely repayment. In the process, we offer a comprehensive definition of crowdfunding and identify avenues for future research based on the gaps and conflicting results in the literature.", "title": "" }, { "docid": "52357eff7eda659bcf225d0ab70cb8d2", "text": "BACKGROUND\nFlexibility is an important physical quality. Self-myofascial release (SMFR) methods such as foam rolling (FR) increase flexibility acutely but how long such increases in range of motion (ROM) last is unclear. Static stretching (SS) also increases flexibility acutely and produces a cross-over effect to contralateral limbs. FR may also produce a cross-over effect to contralateral limbs but this has not yet been identified.\n\n\nPURPOSE\nTo explore the potential cross-over effect of SMFR by investigating the effects of a FR treatment on the ipsilateral limb of 3 bouts of 30 seconds on changes in ipsilateral and contralateral ankle DF ROM and to assess the time-course of those effects up to 20 minutes post-treatment.\n\n\nMETHODS\nA within- and between-subject design was carried out in a convenience sample of 26 subjects, allocated into FR (n=13) and control (CON, n=13) groups. Ankle DF ROM was recorded at baseline with the in-line weight-bearing lunge test for both ipsilateral and contralateral legs and at 0, 5, 10, 15, 20 minutes following either a two-minute seated rest (CON) or 3 x 30 seconds of FR of the plantar flexors of the dominant leg (FR). Repeated measures ANOVA was used to examine differences in ankle DF ROM.\n\n\nRESULTS\nNo significant between-group effect was seen following the intervention. However, a significant within-group effect (p<0.05) in the FR group was seen between baseline and all post-treatment time-points (0, 5, 10, 15 and 20 minutes). Significant within-group effects (p<0.05) were also seen in the ipsilateral leg between baseline and at all post-treatment time-points, and in the contralateral leg up to 10 minutes post-treatment, indicating the presence of a cross-over effect.\n\n\nCONCLUSIONS\nFR improves ankle DF ROM for at least 20 minutes in the ipsilateral limb and up to 10 minutes in the contralateral limb, indicating that FR produces a cross-over effect into the contralateral limb. 

The mechanism producing these cross-over effects is unclear but may involve increased stretch tolerance, as observed following SS.\n\n\nLEVELS OF EVIDENCE\n2c.", "title": "" }, { "docid": "c71635ec5c0ef83c850cab138330f727", "text": "Academic institutions are now drawing attention in finding methods for making effective learning process, for identifying learner’s achievements and weakness, for tracing academic progress and also for predicting future performance. People’s increased expectation for accountability and transparency makes it necessary to implement big data analytics in the educational institution. But not all the educationalist and administrators are ready to take the challenge. So, it is now obvious to know about the necessity and opportunity as well as challenges of implementing big data analytics. This paper will describe the needs, opportunities and challenges of implementing big data analytics in the education sector.", "title": "" }, { "docid": "4f7b6ad29f8a6cbe871ed5a6a9e75896", "text": "Copyright: © 2017. The Author(s). Licensee: AOSIS. This work is licensed under the Creative Commons Attribution License. Introduction Glaucoma is an optic neuropathy that sometimes results in irreversible blindness.1 After cataracts, glaucoma is the second most prevalent cause of global blindness,2 and it is estimated that almost 80 million people worldwide will be affected by this optic neuropathy by the year 2020.3 Because of the high prevalence of this ocular disease, the economic and social implications of glaucoma have been outlined in recent studies.4,5 In Africa, primary open-angle glaucoma (POAG) is more prevalent than primary-angle closure glaucoma, and over the next 4 years, the prevalence of POAG in Africa is projected to increase by 23% corresponding to an increase from 6.2 million to 8.0 million affected individuals.3 Consequently, in Africa, there have been recommendations to incorporate glaucoma screening procedures into routine eye examinations as well as implement glaucoma blindness control programs.6,7", "title": "" }, { "docid": "45df032a26dc7a27ed6f68cea5f7c033", "text": "Computer animation of articulated figures can be tedious, largely due to the amount of data which must be specified at each frame. Animation techniques range from simple interpolation between keyframed figure poses to higher-level algorithmic models of specific movement patterns. The former provides the animator with complete control over the movement, whereas the latter may provide only limited control via some high-level parameters incorporated into the model. Inverse kinematic techniques adopted from the robotics literature have the potential to relieve the animator of detailed specification of every motion parameter within a figure, while retaining complete control over the movement, if desired. This work investigates the use of inverse kinematics and simple geometric constraints as tools for the animator. Previous applications of inverse kinematic algorithms to conlputer animation are reviewed. A pair of alternative algorithms suitable for a direct manipulation interface are presented and qualitatively compared. Application of these algorithms to enforce simple geometric constraints on a figure during interactive manipulation is discussed. 
An implementation of one of these algorithms within an existing figure animation editor is described, which provides constrained inverse kinematic figure manipulation for the creation of keyframes.", "title": "" }, { "docid": "61e6cf5d3a9f9faae4ebdb2cfc4d1ba8", "text": "We present the FuSSO, a functional analogue to the LASSO, that efficiently finds a sparse set of functional input covariates to regress a real-valued response against. The FuSSO does so in a semi-parametric fashion, making no parametric assumptions about the nature of input functional covariates and assuming a linear form to the mapping of functional covariates to the response. We provide a statistical backing for use of the FuSSO via proof of asymptotic sparsistency under various conditions. Furthermore, we observe good results on both synthetic and real-world data.", "title": "" }, { "docid": "d8915ec23a271a7497d3d155c9b32193", "text": "AIM\nThis article is a report of a study conducted to explore the relationship between sources of stress and psychological well-being and to consider how different sources of stress and coping resources might function as moderators and mediators on well-being.\n\n\nBACKGROUND\nIn most research exploring sources of stress and coping in nursing students, stress has been construed as psychological distress. Sources of stress likely to enhance well-being and, by implication, learning have not been considered.\n\n\nMETHOD\nA questionnaire was administered to 171 final year nursing students in 2008. Questions were asked to measure sources of stress when rated as likely to contribute to distress (a hassle) and rated as likely to help one achieve (an uplift). Support, control, self-efficacy and coping style were also measured, along with their potential moderating and mediating effects on well-being, operationalized using the General Health Questionnaire and measures of course and career satisfaction.\n\n\nFINDINGS\nSources of stress likely to lead to distress were more often predictors of well-being than were sources of stress likely to lead to positive, eustress states, with the exception of clinical placement demands. Self-efficacy, dispositional control and support were important predictors, and avoidance coping was the strongest predictor of adverse well-being. Approach coping was not a predictor of well-being. The mere presence of support appeared beneficial as well as the utility of that support to help a student cope.\n\n\nCONCLUSION\nInitiatives to promote support and self-efficacy are likely to have immediate benefits for student well-being. In course reviews, nurse educators need to consider how students' experiences might contribute not just to potential distress, but to eustress as well.", "title": "" }, { "docid": "27eaa5fe0c9684337ce8b6da9de9a8ed", "text": "When we observe someone performing an action, do our brains simulate making that action? Acquired motor skills offer a unique way to test this question, since people differ widely in the actions they have learned to perform. We used functional magnetic resonance imaging to study differences in brain activity between watching an action that one has learned to do and an action that one has not, in order to assess whether the brain processes of action observation are modulated by the expertise and motor repertoire of the observer. Experts in classical ballet, experts in capoeira and inexpert control subjects viewed videos of ballet or capoeira actions. 
Comparing the brain activity when dancers watched their own dance style versus the other style therefore reveals the influence of motor expertise on action observation. We found greater bilateral activations in premotor cortex and intraparietal sulcus, right superior parietal lobe and left posterior superior temporal sulcus when expert dancers viewed movements that they had been trained to perform compared to movements they had not. Our results show that this 'mirror system' integrates observed actions of others with an individual's personal motor repertoire, and suggest that the human brain understands actions by motor simulation.", "title": "" }, { "docid": "d40a55317d8cdebfcd567ea11ad0960f", "text": "This study examined the effects of self-presentation goals on the amount and type of verbal deception used by participants in same-gender and mixed-gender dyads. Participants were asked to engage in a conversation that was secretly videotaped. Self-presentational goal was manipulated, where one member of the dyad (the self-presenter) was told to either appear (a) likable, (b) competent, or (c) was told to simply get to know his or her partner (control condition). After the conversation, self-presenters were asked to review a video recording of the interaction and identify the instances in which they had deceived the other person. Overall, participants told more lies when they had a goal to appear likable or competent compared to participants in the control condition, and the content of the lies varied according to self-presentation goal. In addition, lies told by men and women differed in content, although not in quantity.", "title": "" }, { "docid": "c11b77f1392c79f4a03f9633c8f97f4d", "text": "The paper introduces and discusses a concept of syntactic n-grams (sn-grams) that can be applied instead of traditional n-grams in many NLP tasks. Sn-grams are constructed by following paths in syntactic trees, so sngrams allow bringing syntactic knowledge into machine learning methods. Still, previous parsing is necessary for their construction. We applied sn-grams in the task of authorship attribution for corpora of three and seven authors with very promising results.", "title": "" } ]
scidocsrr
9b5598ef047b929a67f8816e60c1a55b
Additional Multi-Touch Attribution for Online Advertising
[ { "docid": "5ff8d6415a2601afdc4a15c13819f5bb", "text": "This paper studies the effects of various types of online advertisements on purchase conversion by capturing the dynamic interactions among advertisement clicks themselves. It is motivated by the observation that certain advertisement clicks may not result in immediate purchases, but they stimulate subsequent clicks on other advertisements which then lead to purchases. We develop a stochastic model based on mutually exciting point processes, which model advertisement clicks and purchases as dependent random events in continuous time. We incorporate individual random effects to account for consumer heterogeneity and cast the model in the Bayesian hierarchical framework. We propose a new metric of conversion probability to measure the conversion effects of online advertisements. Simulation algorithms for mutually exciting point processes are developed to evaluate the conversion probability and for out-of-sample prediction. Model comparison results show the proposed model outperforms the benchmark model that ignores exciting effects among advertisement clicks. We find that display advertisements have relatively low direct effect on purchase conversion, but they are more likely to stimulate subsequent visits through other advertisement formats. We show that the commonly used measure of conversion rate is biased in favor of search advertisements and underestimates the conversion effect of display advertisements the most. Our model also furnishes a useful tool to predict future purchases and clicks on online", "title": "" } ]
[ { "docid": "76cc47710ab6fa91446844368821c991", "text": "Recommender systems (RSs) have been successfully applied to alleviate the problem of information overload and assist users' decision makings. Multi-criteria recommender systems is one of the RSs which utilizes users' multiple ratings on different aspects of the items (i.e., multi-criteria ratings) to predict user preferences. Traditional approaches simply treat these multi-criteria ratings as addons, and aggregate them together to serve for item recommendations. In this paper, we propose the novel approaches which treat criteria preferences as contextual situations. More specifically, we believe that part of multiple criteria preferences can be viewed as contexts, while others can be treated in the traditional way in multi-criteria recommender systems. We compare the recommendation performance among three settings: using all the criteria ratings in the traditional way, treating all the criteria preferences as contexts, and utilizing selected criteria ratings as contexts. Our experiments based on two real-world rating data sets reveal that treating criteria preferences as contexts can improve the performance of item recommendations, but they should be carefully selected. The hybrid model of using selected criteria preferences as contexts and the remaining ones in the traditional way is finally demonstrated as the overall winner in our experiments.", "title": "" }, { "docid": "ce29ddfd7b3d3a28ddcecb7a5bb3ac8e", "text": "Steganography consist of concealing secret information in a cover object to be sent over a public communication channel. It allows two parties to share hidden information in a way that no intruder can detect the presence of hidden information. This paper presents a novel steganography approach based on pixel location matching of the same cover image. Here the information is not directly embedded within the cover image but a sequence of 4 bits of secret data is compared to the 4 most significant bits (4MSB) of the cover image pixels. The locations of the matching pixels are taken to substitute the 2 least significant bits (2LSB) of the cover image pixels. Since the data are not directly hidden in cover image, the proposed approach is more secure and difficult to break. Intruders cannot intercept it by using common LSB techniques.", "title": "" }, { "docid": "10d8bbea398444a3fb6e09c4def01172", "text": "INTRODUCTION\nRecent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act of 2011.\n\n\nMETHOD\nThe current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. 
Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005-2009.\n\n\nRESULTS\nResults show that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because of inattentive and risky driving.", "title": "" }, { "docid": "64c2b9f59a77f03e6633e5804356e9fc", "text": "We present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employs optimal redundant storage (i.e., two extra disks) is based on Reed-Solomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.", "title": "" }, { "docid": "6627a1d89adf1389959983d04c8c26dd", "text": "Recent models of procrastination due to self-control problems assume that a procrastinator considers just one option and is unaware of her self-control problems. We develop a model where a person chooses from a menu of options and is partially aware of her self-control problems. This menu model replicates earlier results and generates new ones. A person might forego completing an attractive option because she plans to complete a more attractive but never-to-be-completed option. Hence, providing a non-procrastinator additional options can induce procrastination, and a person may procrastinate worse pursuing important goals than unimportant ones.", "title": "" }, { "docid": "5ceb6e39c8f826c0a7fd0e5086090a5f", "text": "The mobile botnet phenomenon is gaining popularity among malware writers seeking to exploit vulnerabilities in smartphones. In particular, mobile botnets enable illegal access to a victim’s smartphone, can compromise critical user data and launch a DDoS attack through Command and Control (C&C). In this article, we propose a static analysis approach, DeDroid, to investigate botnet-specific properties that can be used to detect mobile applications with botnet intentions. Initially, we identify critical features by observing code behavior of the few known malware binaries having C&C features. Then, we compare the identified features with the malicious and benign applications of the Drebin dataset. The comparative analysis shows that the Drebin dataset has 35% malicious applications which qualify as botnets. Upon closer examination, 90% of the potential botnets are confirmed as botnets. 

Similarly, for comparative analysis against benign applications having C&C features, DeDroid has achieved adequate detection accuracy. In addition, DeDroid has achieved high accuracy with negligible false positive rate while making decision for state-of-the-art malicious applications.", "title": "" }, { "docid": "ac37ca6b8bb12305ac6e880e6e7c336a", "text": "In this paper, we are interested in learning the underlying graph structure behind training data. Solving this basic problem is essential to carry out any graph signal processing or machine learning task. To realize this, we assume that the data is smooth with respect to the graph topology, and we parameterize the graph topology using an edge sampling function. That is, the graph Laplacian is expressed in terms of a sparse edge selection vector, which provides an explicit handle to control the sparsity level of the graph. We solve the sparse graph learning problem given some training data in both the noiseless and noisy settings. Given the true smooth data, the posed sparse graph learning problem can be solved optimally and is based on simple rank ordering. Given the noisy data, we show that the joint sparse graph learning and denoising problem can be simplified to designing only the sparse edge selection vector, which can be solved using convex optimization.", "title": "" }, { "docid": "b91291a9b64ef7668633c2a3df82285a", "text": "Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github. com/artetxem/vecmap.", "title": "" }, { "docid": "888a58ccee0297f2c6f8eb9e31383cc0", "text": "A Business Intelligence (BI) system is a technology that provides significant business value by improving the effectiveness of managerial decision-making. In an uncertain and highly competitive business environment, the value of strategic information systems such as these is easily recognised. High adoption rates and investment in BI software and services suggest that these systems are a principal provider of decision support in the current marketplace. Most business investments are screened using some form of evaluation process or technique. The benefits of BI are such that traditional evaluation techniques have difficulty in identifying the soft, intangible benefits often provided by BI. 
This paper, forming the first part of a larger research project, aims to review current evaluation techniques that address intangible benefits, presents issues relating to the evaluation of BI in industry, and suggests a research agenda to advance what is presently a limited body of knowledge relating to the evaluation of BI intangible benefits.", "title": "" }, { "docid": "dcda412c18e92650d9791023f13e4392", "text": "Graphs can straightforwardly represent the relations between objects, which has drawn a lot of attention from both academia and industry. Achievements mainly concentrate on homogeneous graphs and bipartite graphs. However, it is difficult to use existing algorithms in actual scenarios, because in the real world the types of objects and relations are diverse and the amount of data can be very large. Considering the characteristics of the \"black market\", we propose HGsuspector, a novel and scalable algorithm for detecting collective fraud in directed heterogeneous graphs. We first decompose directed heterogeneous graphs into a set of bipartite graphs, then we define a metric on each connected bipartite graph and calculate its scores, which fuse the structural information and event probability. The threshold for distinguishing between normal and abnormal can be obtained by statistics or other anomaly detection algorithms in the score space. We also provide a technical solution for fraud detection in the e-commerce scenario, which has been successfully applied on the Jingdong e-commerce platform to detect collective fraud in real time. The experiments on real-world datasets, which have billions of nodes and edges, demonstrate that HGsuspector is more accurate and faster than the most practical and state-of-the-art approaches by far.", "title": "" }, { "docid": "9585a35e333231ed32871bcb6e7e1002", "text": "Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each datapoint. EP can offer analytic and computational advantages over other approximations, such as Variational Inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it an ideal candidate for performing Bayesian learning on large models in large-scale dataset settings. However, EP has a crucial limitation in this context: the number of approximating factors needs to increase with the number of datapoints, N, which often entails a prohibitively large memory overhead. This paper presents an extension to EP, called stochastic expectation propagation (SEP), that maintains a global posterior approximation (like VI) but updates it in a local way (like EP). Experiments on a number of canonical learning problems using synthetic and real-world datasets indicate that SEP performs almost as well as full EP, but reduces the memory consumption by a factor of N. SEP is therefore ideally suited to performing approximate Bayesian learning in the large model, large dataset setting.", "title": "" }, { "docid": "4e2fbac1742c7afe9136e274150d6ee9", "text": "Recently, knowledge graph embedding, which projects symbolic entities and relations into continuous vector space, has become a new, hot topic in artificial intelligence. 

This paper addresses a new issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples, and proposes a novel generative model for embedding, TransG. The new model can discover latent semantics for a relation and leverage a mixture of relation-specific component vectors to embed a fact triple. To the best of our knowledge, this is the first generative model for knowledge graph embedding, which is able to deal with multiple relation semantics. Extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines.", "title": "" }, { "docid": "e8403145a3d4a8a75348075410683e28", "text": "This paper presents a current-reuse complementary-input (CRCI) telescopic-cascode chopper stabilized amplifier with low-noise low-power operation. The current-reuse complementary-input strategy doubles the amplifier's effective transconductance by full current-reuse between complementary inputs, which significantly improves the noise-power efficiency. A pseudo-resistor based integrator is used in the DC servo loop to generate a high-pass cutoff below 1 Hz. The proposed amplifier features a mid-band gain of 39.25 dB, bandwidth from 0.12 Hz to 7.6 kHz, and draws 2.57 μA from a 1.2-V supply and exhibits an input-referred noise of 3.57 μVrms integrated from 100 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 2.5. The amplifier is designed in 0.13 μm 8-metal CMOS process.", "title": "" }, { "docid": "6400b594b7a7624cf638961ee904e7d0", "text": "As the demands for portable electronic products increase, through-silicon-via (TSV)-based three-dimensional integrated-circuit (3-D IC) integration is becoming increasingly important. Micro-bump-bonded interconnection is one approach that has great potential to meet this requirement. In this paper, a 30-μm pitch chip-to-chip (C2C) interconnection with Cu/Ni/SnAg micro bumps was assembled using the gap-controllable thermal bonding method. The bonding parameters were evaluated by considering the variation in the contact resistance after bonding. The effects of the bonding time and temperature on the IMC thickness of the fabricated C2C interconnects are also investigated to determine the correlation between its thickness and reliability performance. The reliability of the C2C interconnects with the selected underfill was studied by performing a -55°C- 125°C temperature cycling test (TCT) for 2000 cycles and a 150°C high-temperature storage (HTS) test for 2000 h. The interfaces of the failed samples in the TCT and HTS tests are then inspected by scanning electron microscopy (SEM), which is utilized to obtain cross-sectional images. To validate the experimental results, finite-element (FE) analysis is also conducted to elucidate the interconnect reliability of the C2C interconnection. Results show that consistent bonding quality and stable contact resistance of the fine-pitch C2C interconnection with the micro bumps were achieved by giving the appropriate choice of the bonding parameters, and those bonded joints can thus serve as reliable interconnects for use in 3-D chip stacking.", "title": "" }, { "docid": "415e704d747a00447cc1ec8fa0ff2a3d", "text": "We propose a new method to detect when users express the intent to leave a service, also known as churn. While previous work focuses solely on social media, we show that this intent can be detected in chatbot conversations. 
As companies increasingly rely on chatbots, they need an overview of potentially churny users. To this end, we crowdsource and publish a dataset of churn intent expressions in chatbot interactions in German and English. We show that classifiers trained on social media data can detect the same intent in the context of chatbots. We introduce a classification architecture that outperforms existing work on churn intent detection in social media. Moreover, we show that, using bilingual word embeddings, a system trained on combined English and German data outperforms monolingual approaches. As the only existing dataset is in English, we crowdsource and publish a novel dataset of German tweets. We thus underline the universal aspect of the problem, as examples of churn intent in English help us identify churn in German tweets and chatbot conversations.", "title": "" }, { "docid": "01f3f3b3693940963f5f2c4f71585a2a", "text": "BACKGROUND\nStress and anxiety are widely considered to be causally related to alcohol craving and consumption, as well as development and maintenance of alcohol use disorder (AUD). However, numerous preclinical and human studies examining effects of stress or anxiety on alcohol use and alcohol-related problems have been equivocal. This study examined relationships between scores on self-report anxiety, anxiety sensitivity, and stress measures and frequency and intensity of recent drinking, alcohol craving during early withdrawal, as well as laboratory measures of alcohol craving and stress reactivity among heavy drinkers with AUD.\n\n\nMETHODS\nMedia-recruited, heavy drinkers with AUD (N = 87) were assessed for recent alcohol consumption. Anxiety and stress levels were characterized using paper-and-pencil measures, including the Beck Anxiety Inventory (BAI), the Anxiety Sensitivity Index-3 (ASI-3), and the Perceived Stress Scale (PSS). Eligible subjects (N = 30) underwent alcohol abstinence on the Clinical Research Unit; twice daily measures of alcohol craving were collected. On day 4, subjects participated in the Trier Social Stress Test; measures of cortisol and alcohol craving were collected.\n\n\nRESULTS\nIn multivariate analyses, higher BAI scores were associated with lower drinking frequency and reduced drinks/drinking day; in contrast, higher ASI-3 scores were associated with higher drinking frequency. BAI anxiety symptom and ASI-3 scores also were positively related to Alcohol Use Disorders Identification Test total scores and AUD symptom and problem subscale measures. Higher BAI and ASI-3 scores but not PSS scores were related to greater self-reported alcohol craving during early alcohol abstinence. Finally, BAI scores were positively related to laboratory stress-induced cortisol and alcohol craving. In contrast, the PSS showed no relationship with most measures of alcohol craving or stress reactivity.\n\n\nCONCLUSIONS\nOverall, clinically oriented measures of anxiety compared with perceived stress were more strongly associated with a variety of alcohol-related measures in current heavy drinkers with AUD.", "title": "" }, { "docid": "7e6a3a04c24a0fc24012619d60ebb87b", "text": "The recent trend toward democratization in countries throughout the globe has challenged scholars to pursue two potentially contradictory goals: to develop a differentiated conceptualization of democracy that captures the diverse experiences of these countries; and to extend the analysis to this broad range of cases without ‘stretching’ the concept. 
This paper argues that this dual challenge has led to a proliferation of conceptual innovations, including hundreds of subtypes of democracy—i.e., democracy ‘with adjectives.’ The paper explores the strengths and weaknesses of three important strategies of innovation that have emerged: ‘precising’ the definition of democracy; shifting the overarching concept with which democracy is associated; and generating various forms of subtypes. Given the complex structure of meaning produced by these strategies for refining the concept of democracy, we conclude by offering an old piece of advice with renewed urgency: It is imperative that scholars situate themselves in relation to this structure of meaning by clearly defining and explicating the conception of democracy they are employing.", "title": "" }, { "docid": "376bac86251e8a1f8bc0b3af2629f900", "text": "The security of software systems can be threatened by many internal and external threats, including data leakages due to timing channels. Even if developers manage to avoid security threats in the source code or bytecode during development and testing, new threats can arise as the compiler generates machine codes from representations at the binary code level during execution on the processor or due to operating system specifics. Current approaches either do not allow the neutralization of timing channels to be achieved comprehensively with a sufficient degree of security or require an unjustifiable amount of time and/or resources. Herein, a method is demonstrated for the protected execution of software based on a secure virtual execution environment (VEE) that combines the results from dynamic and static analyses to find timing channels through the application of code transformations. This solution complements other available techniques to prevent timing channels from being exploited. This approach helps control the appearance and neutralization of timing channels via just-in-time code modifications during all stages of program development and usage. This work demonstrates the identification of threats using timing channels as an example. The approach presented herein can be expanded to the neutralization of other types of threats.", "title": "" }, { "docid": "c6024a99572523d7cec86769e144937f", "text": "Trolling’ refers to a specific type of malicious online behaviour, intended to disrupt interactions, aggravate interactional partners and lure them into fruitless argumentation. However, as with other categories, both ‘troll’ and ‘trolling’ may have multiple, inconsistent and incompatible meanings, depending upon the context in which the term is used and the aims of the person using the term. Drawing data from 15 online fora and newspaper comment threads, this paper explores how online users mobilise and make use of the term ‘troll’. Data was analysed from a discursive psychological perspective. Four repertoires describing trolls were identified in posters online messages: 1) that trolls are easily identifiable, 2) nostalgia, 3) vigilantism and 4) that trolls are nasty. A final theme follows these repertoires – that of identifying trolls. Analysis also revealed that despite repertoire 01, identifying trolls is not a simple and straight-forward task. Similarly to any other rhetorical category, there are tensions inherent in posters accounts of nature and acceptability of trolling. Neither the category ‘troll’ nor the action of ‘trolling’ has a single, fixed meaning. 
Either action may be presented as desirable or undesirable, depending upon the aims of the poster at the time of posting.", "title": "" } ]
scidocsrr
fd88d796190e7bc02f3d21ced031eca7
Meta Learning Shared Hierarchies
[ { "docid": "01d34357d5b8dbf4b89d3f8683f6fc58", "text": "Reinforcement learning (RL), while often powerful, can suffer from slow learning speeds, particularly in high dimensional spaces. The autonomous decomposition of tasks and use of hierarchical methods hold the potential to significantly speed up learning in such domains. This paper proposes a novel practical method that can autonomously decompose tasks, by leveraging association rule mining, which discovers hidden relationship among entities in data mining. We introduce a novel method called ARM-HSTRL (Association Rule Mining to extract Hierarchical Structure of Tasks in Reinforcement Learning). It extracts temporal and structural relationships of sub-goals in RL, and multi-task RL. In particular,it finds sub-goals and relationship among them. It is shown the significant efficiency and performance of the proposed method in two main topics of RL.", "title": "" } ]
[ { "docid": "afe26c28b56a511452096bfc211aed97", "text": "System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented, UML development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and possibly Object Constraint Language (OCL) expressions across all these artifacts. Our goal here is to support the derivation of functional system test requirements, which will be transformed into test cases, test oracles, and test drivers once we have detailed design information. In this paper, we describe a methodology in a practical way and illustrate it with an example. In this context, we address testability and automation issues, as the ultimate goal is to fully support system testing activities with high-capability tools.", "title": "" }, { "docid": "623cdf022d333ca4d6b244f54d301650", "text": "Alveolar rhabdomyosarcoma (ARMS) are aggressive soft tissue tumors harboring specific fusion transcripts, notably PAX3-FOXO1 (P3F). Current therapy concepts result in unsatisfactory survival rates making the search for innovative approaches necessary: targeting PAX3-FOXO1 could be a promising strategy. In this study, we developed integrin receptor-targeted Lipid-Protamine-siRNA (LPR) nanoparticles using the RGD peptide and validated target specificity as well as their post-silencing effects. We demonstrate that RGD-LPRs are specific to ARMS in vitro and in vivo. Loaded with siRNA directed against the breakpoint of P3F, these particles efficiently down regulated the fusion transcript and inhibited cell proliferation, but did not induce substantial apoptosis. In a xenograft ARMS model, LPR nanoparticles targeting P3F showed statistically significant tumor growth delay as well as inhibition of tumor initiation when injected in parallel with the tumor cells. These findings suggest that RGD-LPR targeting P3F are promising to be highly effective in the setting of minimal residual disease for ARMS.", "title": "" }, { "docid": "9af22f6a1bbb4cbb13508b654e5fd7a5", "text": "We present a 3-D correspondence method to match the geometric extremities of two shapes which are partially isometric. We consider the most general setting of the isometric partial shape correspondence problem, in which shapes to be matched may have multiple common parts at arbitrary scales as well as parts that are not similar. Our rank-and-vote-and-combine algorithm identifies and ranks potentially correct matches by exploring the space of all possible partial maps between coarsely sampled extremities. The qualified top-ranked matchings are then subjected to a more detailed analysis at a denser resolution and assigned with confidence values that accumulate into a vote matrix. A minimum weight perfect matching algorithm is finally iterated to combine the accumulated votes into an optimal (partial) mapping between shape extremities, which can further be extended to a denser map. We test the performance of our method on several data sets and benchmarks in comparison with state of the art.", "title": "" }, { "docid": "fd4bae3bcb2a388e7203fc6c2f9cde6c", "text": "Sign language recognition is helpful in communication between signing people and non-signing people. Various research projects are in progress on different sign language recognition systems worldwide. The research is limited to a particular country as there are country wide variations available. 
The idea of this project is to design a system that can interpret the Indian sign language in the domain of numerals accurately so that the less fortunate people will be able to communicate with the outside world without need of an interpreter in public places like railway stations, banks, etc. The research presented here describes a system for automatic recognition of Indian sign language of numeric signs which are in the form of isolated images, in which only a regular camera was used to acquire the signs. To use the project in real environment, first we created a numeric sign database containing 5000 signs, 500 images per numeral sign. Direct pixel value and hierarchical centroid techniques are used to extract desired features from sign images. After extracting features from images, neural network and kNN classification techniques were used to classify the signs. The result of these experiments is achieved up to 97.10% accuracy.", "title": "" }, { "docid": "10a0f370ad3e9c3d652e397860114f90", "text": "Statistical data associated with geographic regions is nowadays globally available in large amounts and hence automated methods to visually display these data are in high demand. There are several well-established thematic map types for quantitative data on the ratio-scale associated with regions: choropleth maps, cartograms, and proportional symbol maps. However, all these maps suffer from limitations, especially if large data values are associated with small regions. To overcome these limitations, we propose a novel type of quantitative thematic map, the necklace map. In a necklace map, the regions of the underlying two-dimensional map are projected onto intervals on a one-dimensional curve (the necklace) that surrounds the map regions. Symbols are scaled such that their area corresponds to the data of their region and placed without overlap inside the corresponding interval on the necklace. Necklace maps appear clear and uncluttered and allow for comparatively large symbol sizes. They visualize data sets well which are not proportional to region sizes. The linear ordering of the symbols along the necklace facilitates an easy comparison of symbol sizes. One map can contain several nested or disjoint necklaces to visualize clustered data. The advantages of necklace maps come at a price: the association between a symbol and its region is weaker than with other types of maps. Interactivity can help to strengthen this association if necessary. We present an automated approach to generate necklace maps which allows the user to interactively control the final symbol placement. We validate our approach with experiments using various data sets and maps.", "title": "" }, { "docid": "e8f89e651007c7f3a20c1f0c6864ea9f", "text": "We present the design and implementation of a quadrotor tail-sitter Vertical Take-Off and Landing (VTOL) Unmanned Aerial Vehicle (UAV). The VTOL UAV combines the advantage of a quadrotor, vertical take-off and landing and hovering at a stationary point, with that of a fixed-wing, efficient level flight. We describe our vehicle design with special considerations on fully autonomous operation in a real outdoor environment where the wind is present. The designed quadrotor tail-sitter UAV has insignificant vibration level and achieves stable hovering and landing performance when a cross wind is present. Wind tunnel test is conducted to characterize the full envelope aerodynamics of the aircraft, based on which a flight controller is designed, implemented and tested. 
MATLAB simulation is presented and shows that our vehicle can achieve a continuous transition from hover flight to level flight. Finally, both indoor and outdoor flight experiments are conducted to verify the performance of our vehicle and the designed controller.", "title": "" }, { "docid": "fbcc3a5535d63e5a6dfb4e66bd5d7ad5", "text": "Jihadist groups such as ISIS are spreading online propaganda using various forms of social media such as Twitter and YouTube. One of the most common approaches to stop these groups is to suspend accounts that spread propaganda when they are discovered. This approach requires that human analysts manually read and analyze an enormous amount of information on social media. In this work we make a first attempt to automatically detect messages released by jihadist groups on Twitter. We use a machine learning approach that classifies a tweet as containing material that is supporting jihadists groups or not. Even tough our results are preliminary and more tests needs to be carried out we believe that results indicate that an automated approach to aid analysts in their work with detecting radical content on social media is a promising way forward. It should be noted that an automatic approach to detect radical content should only be used as a support tool for human analysts in their work.", "title": "" }, { "docid": "7071a178d42011a39145066da2d08895", "text": "This paper discusses the trend modeling for traffic time series. First, we recount two types of definitions for a long-term trend that appeared in previous studies and illustrate their intrinsic differences. We show that, by assuming an implicit temporal connection among the time series observed at different days/locations, the PCA trend brings several advantages to traffic time series analysis. We also describe and define the so-called short-term trend that cannot be characterized by existing definitions. Second, we sequentially review the role that trend modeling plays in four major problems in traffic time series analysis: abnormal data detection, data compression, missing data imputation, and traffic prediction. The relations between these problems are revealed, and the benefit of detrending is explained. For the first three problems, we summarize our findings in the last ten years and try to provide an integrated framework for future study. For traffic prediction problem, we present a new explanation on why prediction accuracy can be improved at data points representing the short-term trends if the traffic information from multiple sensors can be appropriately used. This finding indicates that the trend modeling is not only a technique to specify the temporal pattern but is also related to the spatial relation of traffic time series.", "title": "" }, { "docid": "cf9fe52efd734c536d0a7daaf59a9bcd", "text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. 
(2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.", "title": "" }, { "docid": "0aa8a611e7ea7934e52a1cb2cd46a579", "text": "The software defined networking (SDN) paradigm promises to dramatically simplify network configuration and resource management. Such features are extremely valuable to network operators and therefore, the industrial (besides the academic) research and development community is paying increasing attention to SDN. Although wireless equipment manufacturers are increasing their involvement in SDN-related activities, to date there is not a clear and comprehensive understanding of what are the opportunities offered by SDN in most common networking scenarios involving wireless infrastructureless communications and how SDN concepts should be adapted to suit the characteristics of wireless and mobile communications. This paper is a first attempt to fill this gap as it aims at analyzing how SDN can be beneficial in wireless infrastructureless networking environments with special emphasis on wireless personal area networks (WPAN). Furthermore, a possible approach (called SDWN) for such environments is presented and some design guidelines are provided.", "title": "" }, { "docid": "f54b00ada7026004482a921bf61b634f", "text": "The purpose of this study was to examine how 1:1 laptop initiative affected student learning at a selected rural Midwestern high school. A total of 105 high school students enrolled in 10th–12th grades during the 2008–2009 school year participated in the study. A survey instrument created by the Mitchell Institute was modified and used to collect data on student perceptions and faculty perceptions of the impact of 1:1 laptop computing on student learning and instructional integration of technology in education. Study findings suggest that integration of 1:1 laptop computing positively impacts student academic engagement and student learning. Therefore, there is need for teachers to implement appropriate computing practices to enhance student learning. Additionally, teachers need to collaborate with their students to learn and understand various instructional technology applications beyond basic Internet browsing and word processing.", "title": "" }, { "docid": "f16ab00d323e4169117eecb72bcb330e", "text": "Despite the availability of various substance abuse treatments, alcohol and drug misuse and related negative consequences remain prevalent. Vipassana meditation (VM), a Buddhist mindfulness-based practice, provides an alternative for individuals who do not wish to attend or have not succeeded with traditional addiction treatments. In this study, the authors evaluated the effectiveness of a VM course on substance use and psychosocial outcomes in an incarcerated population. 
Results indicate that after release from jail, participants in the VM course, as compared with those in a treatment-as-usual control condition, showed significant reductions in alcohol, marijuana, and crack cocaine use. VM participants showed decreases in alcohol-related problems and psychiatric symptoms as well as increases in positive psychosocial outcomes. The utility of mindfulness-based treatments for substance use is discussed.", "title": "" }, { "docid": "df2944f4495357d63c9f3f00bc5bac32", "text": "BACKGROUND\nFrontal fibrosing alopecia is an uncommon condition characterized by progressive frontotemporal recession due to inflammatory destruction of hair follicles. Little is known about the natural history of this disease.\n\n\nOBJECTIVES\nTo determine the clinical features and natural history of frontal fibrosing alopecia.\n\n\nMETHODS\nWe studied the cases notes of patients diagnosed with frontal fibrosing alopecia from 1993 to 2008 at the Royal Hallamshire Hospital, Sheffield.\n\n\nRESULTS\nThere were 18 patients aged between 34 and 71 years. Three were premenopausal. All had frontotemporal recession with scarring. This was associated with partial or complete loss of eyebrows in 15 patients while four had hair loss at other sites. One had keratosis pilaris-like papules on the face, and one had follicular erythema on the cheeks. Three patients had oral lichen planus, of whom two also had cutaneous lichen planus affecting other sites of the body. Treatments given included intralesional triamcinolone acetonide, 0.1% tacrolimus ointment and oral hydroxychloroquine. Progression of frontotemporal recession was seen in some patients, but not all. In one patient the hair line receded by 30 mm over 72 months, whereas in another patient there was no positional change in the hair line after 15 years.\n\n\nCONCLUSIONS\nFrontal fibrosing alopecia is more common in postmenopausal women, but it can occur in younger women. It may be associated with mucocutaneous lichen planus. Recession of the hair line may progress inexorably over many years but this is not inevitable. It is not clear whether or not treatment alters the natural history of the disease - the disease stabilized with time in most of the patients with or without continuing treatment.", "title": "" }, { "docid": "187fcbf0a52de7dd7de30f8846b34e1e", "text": "Goal-oriented dialogue systems typically rely on components specifically developed for a single task or domain. This limits such systems in two different ways: If there is an update in the task domain, the dialogue system usually needs to be updated or completely re-trained. It is also harder to extend such dialogue systems to different and multiple domains. The dialogue state tracker in conventional dialogue systems is one such component — it is usually designed to fit a welldefined application domain. For example, it is common for a state variable to be a categorical distribution over a manually-predefined set of entities (Henderson et al., 2013), resulting in an inflexible and hard-to-extend dialogue system. In this paper, we propose a new approach for dialogue state tracking that can generalize well over multiple domains without incorporating any domain-specific knowledge. Under this framework, discrete dialogue state variables are learned independently and the information of a predefined set of possible values for dialogue state variables is not required. 
Furthermore, it enables adding arbitrary dialogue context as features and allows for multiple values to be associated with a single state variable. These characteristics make it much easier to expand the dialogue state space. We evaluate our framework using the widely used dialogue state tracking challenge data set (DSTC2) and show that our framework yields competitive results with other state-of-the-art results despite incorporating little domain knowledge. We also show that this framework can benefit from widely available external resources such as pre-trained word embeddings.", "title": "" }, { "docid": "b06b156699cae76a0695a923121ed860", "text": "A potent neurotrophic factor that enhances survival of midbrain dopaminergic neurons was purified and cloned. Glial cell line-derived neurotrophic factor (GDNF) is a glycosylated, disulfide-bonded homodimer that is a distantly related member of the transforming growth factor-beta superfamily. In embryonic midbrain cultures, recombinant human GDNF promoted the survival and morphological differentiation of dopaminergic neurons and increased their high-affinity dopamine uptake. These effects were relatively specific; GDNF did not increase total neuron or astrocyte numbers nor did it increase transmitter uptake by gamma-aminobutyric-containing and serotonergic neurons. GDNF may have utility in the treatment of Parkinson's disease, which is marked by progressive degeneration of midbrain dopaminergic neurons.", "title": "" }, { "docid": "15886d83be78940609c697b30eb73b13", "text": "Why is corruption—the misuse of public office for private gain— perceived to be more widespread in some countries than others? Different theories associate this with particular historical and cultural traditions, levels of economic development, political institutions, and government policies. This article analyzes several indexes of “perceived corruption” compiled from business risk surveys for the 1980s and 1990s. Six arguments find support. Countries with Protestant traditions, histories of British rule, more developed economies, and (probably) higher imports were less \"corrupt\". Federal states were more \"corrupt\". While the current degree of democracy was not significant, long exposure to democracy predicted lower corruption.", "title": "" }, { "docid": "57c780448d8771a0d22c8ed147032a71", "text": "“Social TV” is a term that broadly describes the online social interactions occurring between viewers while watching television. In this paper, we show that TV networks can derive value from social media content placed in shows because it leads to increased word of mouth via online posts, and it highly correlates with TV show related sales. In short, we show that TV event triggers change the online behavior of viewers. In this paper, we first show that using social media content on the televised American reality singing competition, The Voice, led to increased social media engagement during the TV broadcast. We then illustrate that social media buzz about a contestant after a performance is highly correlated with song sales from that contestant’s performance. We believe this to be the first study linking TV content to buzz and sales in real time.", "title": "" }, { "docid": "22bdd2c36ef72da312eb992b17302fbe", "text": "In this paper, we present an operational system for cyber threat intelligence gathering from various social platforms on the Internet particularly sites on the darknet and deepnet. 
We focus our attention to collecting information from hacker forum discussions and marketplaces offering products and services focusing on malicious hacking. We have developed an operational system for obtaining information from these sites for the purposes of identifying emerging cyber threats. Currently, this system collects on average 305 high-quality cyber threat warnings each week. These threat warnings include information on newly developed malware and exploits that have not yet been deployed in a cyber-attack. This provides a significant service to cyber-defenders. The system is significantly augmented through the use of various data mining and machine learning techniques. With the use of machine learning models, we are able to recall 92% of products in marketplaces and 80% of discussions on forums relating to malicious hacking with high precision. We perform preliminary analysis on the data collected, demonstrating its application to aid a security expert for better threat analysis.", "title": "" }, { "docid": "1b47dffdff3825ad44a0430311e2420b", "text": "The present paper describes the SSM algorithm of protein structure comparison in three dimensions, which includes an original procedure of matching graphs built on the protein's secondary-structure elements, followed by an iterative three-dimensional alignment of protein backbone Calpha atoms. The SSM results are compared with those obtained from other protein comparison servers, and the advantages and disadvantages of different scores that are used for structure recognition are discussed. A new score, balancing the r.m.s.d. and alignment length Nalign, is proposed. It is found that different servers agree reasonably well on the new score, while showing considerable differences in r.m.s.d. and Nalign.", "title": "" } ]
scidocsrr
cb2a722961036106e0aca64401de7218
Classified Stable Matching
[ { "docid": "e2a863f5407ce843af196c105adfb2fe", "text": "We study the Student-Project Allocation problem (SPA), a generalisation of the classical Hospitals / Residents problem (HR). An instance of SPA involves a set of students, projects and lecturers. Each project is offered by a unique lecturer, and both projects and lecturers have capacity constraints. Students have preferences over projects, whilst lecturers have preferences over students. We present two optimal linear-time algorithms for allocating students to projects, subject to the preference and capacity constraints. In particular, each algorithm finds a stable matching of students to projects. Here, the concept of stability generalises the stability definition in the HR context. The stable matching produced by the first algorithm is simultaneously best-possible for all students, whilst the one produced by the second algorithm is simultaneously best-possible for all lecturers. We also prove some structural results concerning the set of stable matchings in a given instance of SPA. The SPA problem model that we consider is very general and has applications to a range of different contexts besides student-project allocation.", "title": "" } ]
[ { "docid": "504776b83a292b320aaf0d0b02947d02", "text": "The combination of unique single nucleotide polymorphisms in the CCR5 regulatory and in the CCR2 and CCR5 coding regions, defined nine CCR5 human haplogroups (HH): HHA-HHE, HHF*1, HHF*2, HHG*1, and HHG*2. Here we examined the distribution of CCR5 HH and their association with HIV infection and disease progression in 36 HIV-seronegative and 76 HIV-seropositive whites from North America and Spain [28 rapid progressors (RP) and 48 slow progressors (SP)]. Although analyses revealed that HHE frequencies were similar between HIV-seronegative and HIV-seropositive groups (25.0% vs. 32.2%, p > 0.05), HHE frequency in RP was significantly higher than that in SP (48.2% vs. 22.9%, p = 0.002). Survival analysis also showed that HHE heterozygous and homozygous were associated with an accelerated CD4 cell count decline to less than 200 cells/microL (adjusted RH 2.44, p = 0.045; adjusted RH = 3.12, p = 0.037, respectively). These data provide further evidence that CCR5 human haplogroups influence HIV-1 disease progression in HIV-infected persons.", "title": "" }, { "docid": "4efc599a92ed83f54c31d87878e0000d", "text": "In this paper, we propose a dynamic virtual machine consolidation algorithm to minimize the number of active physical servers on a data center in order to reduce energy cost. The proposed dynamic consolidation method uses the k-nearest neighbor regression algorithm to predict resource usage in each host. Based on prediction utilization, the consolidation method can determine (i) when a host becomes over-utilized (ii) when a host becomes under-utilized. Experimental results on the real workload traces from more than a thousand Planet Lab virtual machines show that the proposed technique minimizes energy consumption and maintains required performance levels.", "title": "" }, { "docid": "9e766871b172f7a752c8af629bd10856", "text": "A fundamental computational limit on automated reasoning and its effect on Knowledge Representation is examined. Basically, the problem is that it can be more difficult to reason correctly ;Nith one representationallanguage than with another and, moreover, that this difficulty increases dramatically as the expressive power of the language increases. This leads to a tradeoff between the expressiveness of a representational language and its computational tractability. Here we show that this tradeoff can be seen to underlie the differences among a number of existing representational formalisms, in addition to motivating many of the current research issues in Knowledge Representation.", "title": "" }, { "docid": "532fa89af9499db8d4c50abcb17b633a", "text": "Our languages are in constant flux driven by external factors such as cultural, societal and technological changes, as well as by only partially understood internal motivations. Words acquire new meanings and lose old senses, new words are coined or borrowed from other languages and obsolete words slide into obscurity. Understanding the characteristics of shifts in the meaning and in the use of words is useful for those who work with the content of historical texts, the interested general public, but also in and of itself. The findings from automatic lexical semantic change detection, and the models of diachronic conceptual change are currently being incorporated in approaches for measuring document across-time similarity, information retrieval from long-term document archives, the design of OCR algorithms, and so on. 
In recent years we have seen a surge in interest in the academic community in computational methods and tools supporting inquiry into diachronic conceptual change and lexical replacement. This article is an extract of a survey of recent computational techniques to tackle lexical semantic change currently under review. In this article we focus on diachronic conceptual change as an extension of semantic change.", "title": "" }, { "docid": "ae167d6e1ff2b1ee3bd23e3e02800fab", "text": "The aim of this paper is to improve the classification performance based on the multiclass imbalanced datasets. In this paper, we introduce a new resampling approach based on Clustering with sampling for Multiclass Imbalanced classification using Ensemble (C-MIEN). C-MIEN uses the clustering approach to create a new training set for each cluster. The new training sets consist of the new label of instances with similar characteristics. This step is applied to reduce the number of classes then the complexity problem can be easily solved by C-MIEN. After that, we apply two resampling techniques (oversampling and undersampling) to rebalance the class distribution. Finally, the class distribution of each training set is balanced and ensemble approaches are used to combine the models obtained with the proposed method through majority vote. Moreover, we carefully design the experiments and analyze the behavior of C-MIEN with different parameters (imbalance ratio and number of classifiers). The experimental results show that C-MIEN achieved higher performance than state-of-the-art methods.", "title": "" }, { "docid": "2ac40255efaf4de4a389279196b35379", "text": "In this note, the concept of Vertical Slit Transistor Based Integrated Circuits (VeSTICs) is introduced and its feasibility discussed. VeSTICs paradigm has been conceived in response to the rapidly growing complexity/cost of the traditional bulk-CMOS-based approach and to challenges posed by the nano-scale era. This paradigm is based on strictly regular layouts. The central element of the proposed vision is a new junctionless Vertical Slit Field Effect Transistor (JL VeSFET) with twin independent gates. It is expected that VeSTICs will enable much denser, much easier to design, test and manufacture ICs, as well as, will be 3Dextendable and OPC-free.", "title": "" }, { "docid": "481018ae479f8a6b8669972156d234d6", "text": "AIM\nThis paper is a report of a discussion of the arguments surrounding the role of the initial literature review in grounded theory.\n\n\nBACKGROUND\nResearchers new to grounded theory may find themselves confused about the literature review, something we ourselves experienced, pointing to the need for clarity about use of the literature in grounded theory to help guide others about to embark on similar research journeys.\n\n\nDISCUSSION\nThe arguments for and against the use of a substantial topic-related initial literature review in a grounded theory study are discussed, giving examples from our own studies. The use of theoretically sampled literature and the necessity for reflexivity are also discussed. Reflexivity is viewed as the explicit quest to limit researcher effects on the data by awareness of self, something seen as integral both to the process of data collection and the constant comparison method essential to grounded theory.\n\n\nCONCLUSION\nA researcher who is close to the field may already be theoretically sensitized and familiar with the literature on the study topic. 
Use of literature or any other preknowledge should not prevent a grounded theory arising from the inductive-deductive interplay which is at the heart of this method. Reflexivity is needed to prevent prior knowledge distorting the researcher's perceptions of the data.", "title": "" }, { "docid": "27ffdb0d427d2e281ffe84e219e6ed72", "text": "UNLABELLED\nHitherto, noncarious cervical lesions (NCCLs) of teeth have been generally ascribed to either toothbrush-dentifrice abrasion or acid \"erosion.\" The last two decades have provided a plethora of new studies concerning such lesions. The most significant studies are reviewed and integrated into a practical approach to the understanding and designation of these lesions. A paradigm shift is suggested regarding use of the term \"biocorrosion\" to supplant \"erosion\" as it continues to be misused in the United States and many other countries of the world. Biocorrosion embraces the chemical, biochemical, and electrochemical degradation of tooth substance caused by endogenous and exogenous acids, proteolytic agents, as well as the piezoelectric effects only on dentin. Abfraction, representing the microstructural loss of tooth substance in areas of stress concentration, should not be used to designate all NCCLs because these lesions are commonly multifactorial in origin. Appropriate designation of a particular NCCL depends upon the interplay of the specific combination of three major mechanisms: stress, friction, and biocorrosion, unique to that individual case. Modifying factors, such as saliva, tongue action, and tooth form, composition, microstructure, mobility, and positional prominence are elucidated.\n\n\nCLINICAL SIGNIFICANCE\nBy performing a comprehensive medical and dental history, using precise terms and concepts, and utilizing the Revised Schema of Pathodynamic Mechanisms, the dentist may successfully identify and treat the etiology of root surface lesions. Preventive measures may be instituted if the causative factors are detected and their modifying factors are considered.", "title": "" }, { "docid": "713c7761ecba317bdcac451fcc60e13d", "text": "We describe a method for automatically transcribing guitar tablatures from audio signals in accordance with the player's proficiency for use as support for a guitar player's practice. The system estimates the multiple pitches in each time frame and the optimal fingering considering playability and player's proficiency. It combines a conventional multipitch estimation method with a basic dynamic programming method. The difficulty of the fingerings can be changed by tuning the parameter representing the relative weights of the acoustical reproducibility and the fingering easiness. Experiments conducted using synthesized guitar audio signals to evaluate the transcribed tablatures in terms of the multipitch estimation accuracy and fingering easiness demonstrated that the system can simplify the fingering with higher precision of multipitch estimation results than the conventional method.", "title": "" }, { "docid": "e347eadb8df6386e70171d73388b8ace", "text": "An ultra-large voltage conversion ratio converter is proposed by integrating a switched-capacitor circuit with a coupled inductor technology. The proposed converter can be seen as an equivalent parallel connection to the load of a basic boost converter and a number of forward converters, each one containing a switched-capacitor circuit. All the stages are activated by the boost switch. 
A single active switch is required, with no need of extreme duty-ratio values. The leakage energy of the coupled inductor is recycled to the load. The inrush current problem of switched capacitors is restrained by the leakage inductance of the coupled-inductor. The above features are the reason for the high efficiency performance. The operating principles and steady state analyses of continuous, discontinuous and boundary conduction modes are discussed in detail. To verify the performance of the proposed converter, a 200 W/20 V to 400 V prototype was implemented. The maximum measured efficiency is 96.4%. The full load efficiency is 95.1%.", "title": "" }, { "docid": "98b3f17de080aed8bce62e1c00f66605", "text": "While strong progress has been made in image captioning recently, machine and human captions are still quite distinct. This is primarily due to the deficiencies in the generated word distribution, vocabulary size, and strong bias in the generators towards frequent captions. Furthermore, humans – rightfully so – generate multiple, diverse captions, due to the inherent ambiguity in the captioning task which is not explicitly considered in today's systems. To address these challenges, we change the training objective of the caption generator from reproducing ground-truth captions to generating a set of captions that is indistinguishable from human written captions. Instead of handcrafting such a learning target, we employ adversarial training in combination with an approximate Gumbel sampler to implicitly match the generated distribution to the human one. While our method achieves comparable performance to the state-of-the-art in terms of the correctness of the captions, we generate a set of diverse captions that are significantly less biased and better match the global uni-, bi- and tri-gram distributions of the human captions.", "title": "" }, { "docid": "620652a31904be950376332c7f97304d", "text": "We combine two of the most popular approaches to automated Grammatical Error Correction (GEC): GEC based on Statistical Machine Translation (SMT) and GEC based on Neural Machine Translation (NMT). The hybrid system achieves new state-of-the-art results on the CoNLL-2014 and JFLEG benchmarks. This GEC system preserves the accuracy of SMT output and, at the same time, generates more fluent sentences as it typical for NMT. Our analysis shows that the created systems are closer to reaching human-level performance than any other GEC system reported so far.", "title": "" }, { "docid": "6f1fc6a07d0beb235f5279e17a46447f", "text": "Nowadays, automatic multidocument text summarization systems can successfully retrieve the summary sentences from the input documents. But, it has many limitations such as inaccurate extraction to essential sentences, low coverage, poor coherence among the sentences, and redundancy. This paper introduces a new concept of timestamp approach with Naïve Bayesian Classification approach for multidocument text summarization. The timestamp provides the summary an ordered look, which achieves the coherent looking summary. It extracts the more relevant information from the multiple documents. Here, scoring strategy is also used to calculate the score for the words to obtain the word frequency. The higher linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents the comparison between the proposed methods with the existing MEAD algorithm. 
The timestamp procedure is also applied on the MEAD algorithm and the results are examined with the proposed method. The results show that the proposed method results in lesser time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method results in better precision, recall, and F-score than the existing clustering with lexical chaining approach.", "title": "" }, { "docid": "ee732b213767471c29f12e7d00f4ded3", "text": "The increasing interest in scene text reading in multilingual environments raises the need to recognize and distinguish between different writing systems. In this paper, we propose a novel method for script identification in scene text using triplets of local convolutional features in combination with the traditional bag-of-visual-words model. Feature triplets are created by making combinations of descriptors extracted from local patches of the input images using a convolutional neural network. This approach allows us to generate a more descriptive codeword dictionary for the bag-of-visual-words model, as the low discriminative power of weak descriptors is enhanced by other descriptors in a triplet. The proposed method is evaluated on two public benchmark datasets for scene text script identification and a public dataset for script identification in video captions. The experiments demonstrate that our method outperforms the baseline and yields competitive results on all three datasets.", "title": "" }, { "docid": "76941f2eb0d881f7c66ba6f38dac9637", "text": "Scientific papers revolve around citations, and for many discourse level tasks one needs to know whose work is being talked about at any point in the discourse. In this paper, we introduce the scientific attribution task, which links different linguistic expressions to citations. We discuss the suitability of different evaluation metrics and evaluate our classification approach to deciding attribution both intrinsically and in an extrinsic evaluation where information about scientific attribution is shown to improve performance on Argumentative Zoning, a rhetorical classification task.", "title": "" }, { "docid": "bd882f762be5a9cb67191a7092fc88e3", "text": "This study tested the criterion validity of the inventory, Mental Toughness 48, by assessing the correlation between mental toughness and physical endurance for 41 male undergraduate sports students. A significant correlation of .34 was found between scores for overall mental toughness and the time a relative weight could be held suspended. Results support the criterion-related validity of the Mental Toughness 48.", "title": "" }, { "docid": "6cc99565a0e9081a94e82be93a67482e", "text": "The existing shortage of therapists and caregivers assisting physically disabled individuals at home is expected to increase and become serious problem in the near future. The patient population needing physical rehabilitation of the upper extremity is also constantly increasing. Robotic devices have the potential to address this problem as noted by the results of recent research studies. However, the availability of these devices in clinical settings is limited, leaving plenty of room for improvement. The purpose of this paper is to document a review of robotic devices for upper limb rehabilitation including those in developing phase in order to provide a comprehensive reference about existing solutions and facilitate the development of new and improved devices. 
In particular the following issues are discussed: application field, target group, type of assistance, mechanical design, control strategy and clinical evaluation. This paper also includes a comprehensive, tabulated comparison of technical solutions implemented in various systems.", "title": "" }, { "docid": "1ad1690ff359462acb320edb42ac821e", "text": "Green marketing subsumes greening products as well as greening firms. In addition to manipulating the 4Ps (product, price, place and promotion) of the traditional marketing mix, it requires a careful understanding of public policy processes. This paper focuses primarily on promoting products by employing claims about their environmental attributes or about firms that manufacture and/or sell them. Secondarily, it focuses on product and pricing issues. Drawing on multiple literatures, it examines issues such as what needs to be greened (products, systems or processes), why consumers purchase/do not purchase green products and how firms should think about information disclosure strategies on environmental claims. Copyright  2002 John Wiley & Sons, Ltd and ERP Environment.", "title": "" }, { "docid": "bdf9fa05c3c2e14dabb37dc380ae30fd", "text": "The decomposition of a music audio signal into its vocal and backing track components is analogous to image-toimage translation, where a mixed spectrogram is transformed into its constituent sources. We propose a novel application of the U-Net architecture — initially developed for medical imaging — for the task of source separation, given its proven capacity for recreating the fine, low-level detail required for high-quality audio reproduction. Through both quantitative evaluation and subjective assessment, experiments demonstrate that the proposed algorithm achieves state-of-the-art performance.", "title": "" }, { "docid": "93d40aa40a32edab611b6e8c4a652dbb", "text": "In this paper, we present a detailed design of dynamic video segmentation network (DVSNet) for fast and efficient semantic video segmentation. DVSNet consists of two convolutional neural networks: a segmentation network and a flow network. The former generates highly accurate semantic segmentations, but is deeper and slower. The latter is much faster than the former, but its output requires further processing to generate less accurate semantic segmentations. We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score. Frame regions with a higher expected confidence score traverse the flow network. Frame regions with a lower expected confidence score have to pass through the segmentation network. We have extensively performed experiments on various configurations of DVSNet, and investigated a number of variants for the proposed decision network. The experimental results show that our DVSNet is able to achieve up to 70.4% mIoU at 19.8 fps on the Cityscape dataset. A high speed version of DVSNet is able to deliver an fps of 30.4 with 63.2% mIoU on the same dataset. DVSNet is also able to reduce up to 95% of the computational workloads.", "title": "" } ]
scidocsrr
4192812a23215575a6aebd9f5fd64130
ADVANTAGE-WEIGHTED INFORMATION MAXIMIZATION
[ { "docid": "de018dc74dd255cf54d9c5597a1f9f73", "text": "Smoothness regularization is a popular method to decrease generalization error. We propose a novel regularization technique that rewards local distributional smoothness (LDS), a KLdistance based measure of the model’s robustness against perturbation. The LDS is defined in terms of the direction to which the model distribution is most sensitive in the input space. We call the training with LDS regularization virtual adversarial training (VAT). VAT resembles the adversarial training (Goodfellow et al., 2015), but distinguishes itself in that it determines the adversarial direction from the model distribution alone, and does not use the label information. The technique is therefore applicable even to semi-supervised learning. When we applied our technique to the classification task of the permutation invariant MNIST dataset, it not only eclipsed all the models that are not dependent on generative models and pre-training, but also performed well even in comparison to the state of the art method (Rasmus et al., 2015) that uses a highly advanced generative model.", "title": "" }, { "docid": "503af27bc7de93815010aefbae4a20ed", "text": "This work shows that policies with simple linear and RBF parameterizations can be trained to solve a variety of widely studied continuous control tasks, including the gym-v1 benchmarks. The performance of these trained policies are competitive with state of the art results, obtained with more elaborate parameterizations such as fully connected neural networks. Furthermore, the standard training and testing scenarios for these tasks are shown to be very limited and prone to over-fitting, thus giving rise to only trajectory-centric policies. Training with a diverse initial state distribution induces more global policies with better generalization. This allows for interactive control scenarios where the system recovers from large on-line perturbations; as shown in the supplementary video.", "title": "" }, { "docid": "ddae1c6469769c2c7e683bfbc223ad1a", "text": "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments1 show2 that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.", "title": "" } ]
[ { "docid": "4f6f441129aa47b09984f893b910035f", "text": "Hydroxycinnamic acids (such as ferulic, caffeic, sinapic, and p-coumaric acids) are a group of compounds highly abundant in food that may account for about one-third of the phenolic compounds in our diet. Hydroxycinnamic acids have gained an increasing interest in health because they are known to be potent antioxidants. These compounds have been described as chain-breaking antioxidants acting through radical scavenging activity, that is related to their hydrogen or electron donating capacity and to the ability to delocalize/stabilize the resulting phenoxyl radical within their structure. The free radical scavenger ability of antioxidants can be predicted from standard one-electron potentials. Thus, voltammetric methods have often been applied to characterize a diversity of natural and synthetic antioxidants essentially to get an insight into their mechanism and also as an important tool for the rational design of new and potent antioxidants. The structure-property-activity relationships (SPARs) correlations already established for this type of compounds suggest that redox potentials could be considered a good measure of antioxidant activity and an accurate guideline on the drug discovery and development process. Due to its magnitude in the antioxidant field, the electrochemistry of hydroxycinnamic acid-based antioxidants is reviewed highlighting the structure-property-activity relationships (SPARs) obtained so far.", "title": "" }, { "docid": "af3297de35d49f774e2f31f31b09fd61", "text": "This paper explores the phenomena of the emergence of the use of artificial intelligence in teaching and learning in higher education. It investigates educational implications of emerging technologies on the way students learn and how institutions teach and evolve. Recent technological advancements and the increasing speed of adopting new technologies in higher education are explored in order to predict the future nature of higher education in a world where artificial intelligence is part of the fabric of our universities. We pinpoint some challenges for institutions of higher education and student learning in the adoption of these technologies for teaching, learning, student support, and administration and explore further directions for research.", "title": "" }, { "docid": "75821b0aaf9c35490858d2f17d8fcb3e", "text": "Heretofore the concept of \" blockchain \" has not been precisely defined. Accordingly the potential useful applications of this technology have been largely inflated. This work sidesteps the question of what constitutes a blockchain as such and focuses on the architectural components of the Bitcoin cryptocurrency, insofar as possible, in isolation. We consider common problems inherent in the design of effective supply chain management systems. With each identified problem we propose a solution that utilizes one or more component aspects of Bitcoin. This culminates in five design principles for increased efficiency in supply chain management systems through the application of incentive mechanisms and data structures native to the Bitcoin cryptocurrency protocol.", "title": "" }, { "docid": "52b481885dc7ad62dc4e8b3e31b9e71e", "text": "In this paper, we propose a novel deep learning based video sa li ncy prediction method, named DeepVS. Specifically, we establ i h a large-scale eye-tracking database of videos (LEDOV), which includes 32 ubjects’ fixations on 538 videos. 
We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.", "title": "" }, { "docid": "cc6267d02ecbb1d2679ac30ee5b56d82", "text": "We established polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) and diagnostic PCR based on cytochrome C oxidase subunit I (COI) barcodes of Bungarus multicinctus, genuine Jinqian Baihua She (JBS), and adulterant snake species. The PCR-RFLP system utilizes the specific restriction sites of SpeI and BstEII in the COI sequence of B. multicinctus to allow its cleavage into 3 fragments (120 bp, 230 bp, and 340 bp); the COI sequences of the adulterants do not contain these restriction sites and therefore remained intact after digestion with SpeI and BstEII (except for that of Zaocys dhumnades, which could be cleaved into a 120 bp and a 570 bp fragment). For diagnostic PCR, a pair of species-specific primers (COI37 and COI337) was designed to amplify a specific 300 bp amplicon from the genomic DNA of B. multicinctus; no such amplicons were found in other allied species. We tested the two methods using 11 commercial JBS samples, and the results demonstrated that barcode-based PCR-RFLP and diagnostic PCR both allowed effective and accurate authentication of JBS.", "title": "" }, { "docid": "c3f7f9b70763c012698cad8295e50f2c", "text": "Recommender systems are widely used in many areas, especially in e-commerce. Recently, they are also applied in e-learning tasks such as recommending resources (e.g. papers, books,..) to the learners (students). In this work, we propose a novel approach which uses recommender system techniques for educational data mining, especially for predicting student performance. To validate this approach, we compare recommender system techniques with traditional regression methods such as logistic/linear regression by using educational data for intelligent tutoring systems. Experimental results show that the proposed approach can improve prediction results.", "title": "" }, { "docid": "5ce82b8c2cc87ae84026d230f3a97e06", "text": "This paper presents a new physically-based method for predicting natural hairstyles in the presence of gravity and collisions. The method is based upon a mechanically accurate model for static elastic rods (Kirchhoff model), which accounts for the natural curliness of hair, as well as for hair ellipticity. The equilibrium shape is computed in a stable and easy way by energy minimization. 
This yields various typical hair configurations that can be observed in the real world, such as ringlets. As our results show, the method can generate different hair types with a very few input parameters, and perform virtual hairdressing operations such as wetting, cutting and drying hair.", "title": "" }, { "docid": "9c01e0dff555a29cc3ffdcab1e861994", "text": "To determine whether there is any new anatomical structure present within the labia majora. A case serial study was executed on eleven consecutive fresh human female cadavers. Stratum-by-stratum dissections of the labia majora were performed. Twenty-two anatomic dissections of labia majora were completed. Eosin and Hematoxylin agents were used to stain newly discovered adipose sac’s tissues of the labia majora and the cylinder-like structures, which cover condensed adipose tissues. The histology of these two structures was compared. All dissected labia majora demonstrated the presence of the anatomic existence of the adipose sac structure. Just under the dermis of the labia majora, the adipose sac was located, which was filled with lobules containing condensed fatty tissues in the form of cylinders. The histological investigation established that the well-organized fibro-connective-adipose tissues represented the adipose sac. The absence of descriptions of the adipose sac within the labia majora in traditional anatomic and gynecologic textbooks was noted. In this study group, the newly discovered adipose sac is consistently present within the anatomical structure of the labia majora. The well-organized fibro-connective-adipose tissue represents microscopic characteristic features of the adipose sac.", "title": "" }, { "docid": "3a6a97b2705d90b031ab1e065281465b", "text": "Common (Cinnamomum verum, C. zeylanicum) and cassia (C. aromaticum) cinnamon have a long history of use as spices and flavouring agents. A number of pharmacological and clinical effects have been observed with their use. The objective of this study was to systematically review the scientific literature for preclinical and clinical evidence of safety, efficacy, and pharmacological activity of common and cassia cinnamon. Using the principles of evidence-based practice, we searched 9 electronic databases and compiled data according to the grade of evidence found. One pharmacological study on antioxidant activity and 7 clinical studies on various medical conditions were reported in the scientific literature including type 2 diabetes (3), Helicobacter pylori infection (1), activation of olfactory cortex of the brain (1), oral candidiasis in HIV (1), and chronic salmonellosis (1). Two of 3 randomized clinical trials on type 2 diabetes provided strong scientific evidence that cassia cinnamon demonstrates a therapeutic effect in reducing fasting blood glucose by 10.3%–29%; the third clinical trial did not observe this effect. Cassia cinnamon, however, did not have an effect at lowering glycosylated hemoglobin (HbA1c). One randomized clinical trial reported that cassia cinnamon lowered total cholesterol, low-density lipoprotein cholesterol, and triglycerides; the other 2 trials, however, did not observe this effect. There was good scientific evidence that a species of cinnamon was not effective at eradicating H. pylori infection. 
Common cinnamon showed weak to very weak evidence of efficacy in treating oral candidiasis in HIV patients and chronic", "title": "" }, { "docid": "a271371ba28be10b67e31ecca6f3aa88", "text": "The toxicity and repellency of the bioactive chemicals of clove (Syzygium aromaticum) powder, eugenol, eugenol acetate, and beta-caryophyllene were evaluated against workers of the red imported fire ant, Solenopsis invicta Buren. Clove powder applied at 3 and 12 mg/cm2 provided 100% ant mortality within 6 h, and repelled 99% within 3 h. Eugenol was the fastest acting compound against red imported fire ant compared with eugenol acetate, beta-caryophyllene, and clove oil. The LT50 values inclined exponentially with the increase in the application rate of the chemical compounds tested. However, repellency did not increase with the increase in the application rate of the chemical compounds tested, but did with the increase in exposure time. Eugenol, eugenol acetate, as well as beta-caryophyllene and clove oil may provide another tool for red imported fire ant integrated pest management, particularly in situations where conventional insecticides are inappropriate.", "title": "" }, { "docid": "41e71a03c2abdd0fec78e8273709efa7", "text": "Logical correction of aging contour changes of the face is based on understanding its structure and the processes involved in the aging appearance. Aging changes are seen at all tissue levels between the skin and bone although the relative contribution of each component to the overall change of facial appearance has yet to be satisfactorily determined. Significantly, the facial skeleton changes profoundly with aging as a consequence of significant resorption of the bones of dental origin in particular. The resultant loss of skeletal projection gives the visual impression of descent while the reduced ligamentous support leads to laxity of the overlying soft tissues. Understanding the specific changes of the face with aging is fundamental to achieving optimum correction and safe use of injectables for facial rejuvenation.", "title": "" }, { "docid": "8014c32fa820e1e2c54e1004b62dc33e", "text": "Signature-based malicious code detection is the standard technique in all commercial anti-virus software. This method can detect a virus only after the virus has appeared and caused damage. Signature-based detection performs poorly when attempting to identify new viruses. Motivated by the standard signature-based technique for detecting viruses, and a recent successful text classification method, n-grams analysis, we explore the idea of automatically detecting new malicious code. We employ n-grams analysis to automatically generate signatures from malicious and benign software collections. The n-grams-based signatures are capable of classifying unseen benign and malicious code. The datasets used are large compared to earlier applications of n-grams analysis.", "title": "" }, { "docid": "2b91abf4b2a12c852fc78eb40b0b22ba", "text": "Interdisciplinary research broadens the view of particular problems yielding fresh and possibly unexpected insights. This is the case of neuromorphic engineering where technology and neuroscience cross-fertilize each other. For example, consider on one side the recently discovered memristor, postulated in 1971, thanks to research in nanotechnology electronics. 
On the other side, consider the mechanism known as Spike-TimeDependent-Plasticity (STDP) which describes a neuronal synaptic learning mechanism that outperforms the traditional Hebbian synaptic plasticity proposed in 1949. STDP was originally postulated as a computer learning algorithm, and is being used by the machine intelligence and computational neuroscience community. At the same time its biological and physiological foundations have been reasonably well established during the past decade. If memristance and STDP can be related, then (a) recent discoveries in nanophysics and nanoelectronic principles may shed new lights into understanding the intricate molecular and physiological mechanisms behind STDP in neuroscience, and (b) new neuromorphic-like computers built out of nanotechnology memristive devices could incorporate the biological STDP mechanisms yielding a new generation of self-adaptive ultrahigh-dense intelligent machines. Here we show that by combining memristance models with the electrical wave signals of neural impulses (spikes) converging from preand post-synaptic neurons into a synaptic junction, STDP behavior emerges naturally. This result serves to understand how neural and memristance parameters modulate STDP, which might bring new insights to neurophysiologists in searching for the ultimate physiological mechanisms responsible for STDP in biological synapses. At the same time, this result also provides a direct mean to incorporate STDP learning mechanisms into a new generation of nanotechnology computers employing memristors. Memristance was postulated in 1971 by Chua based on circuit theoretical reasonings and has been recently demonstrated in nanoscale two-terminal devices, such as certain titanium-dioxide and amorphous Silicon cross-point switches. Memristance arises naturally in nanoscale devices because small voltages can yield enormous electric fields that produce the motion of charged atomic or molecular species changing structural properties of a device (such as its conductance) while it operates. By definition a memristor obeys equations of the form", "title": "" }, { "docid": "cf77d802b84093a2b2bc666bf0c5665e", "text": "Past research has reported that females use exclamation points more frequently than do males. Such research often characterizes exclamation points as ‘‘markers of excitability,’’ a term that suggests instability and emotional randomness, yet it has not necessarily examined the contexts in which exclamation points appeared for evidence of ‘‘excitability.’’ The present study uses a 16-category coding frame in a content analysis of 200 exclamations posted to two electronic discussion groups serving the library and information science profession. The results indicate that exclamation points rarely function as markers of excitability in these professional forums, but may function as markers of friendly interaction, a finding with implications for understanding gender styles in email and other forms of computer-mediated communication.", "title": "" }, { "docid": "fa086058ad67602b9b4429f950e70c0f", "text": "The Telecare Medicine Information System (TMIS) has brought us a lot of conveniences. However, it may also reveal patients’ privacies and other important information. So the security of TMIS can be paid much attention to, in which identity authentication plays a very important role in protecting TMIS from being illegally used. To improve the situation, TMIS needs a more secure and more efficient authentication scheme. Recently, Yan and Li et al. 
have proposed a secure authentication scheme for the TMIS based on biometrics, claiming that it can withstand various attacks. In this paper, we present several security problems in their scheme as follows: (a) it cannot really achieve three-factor authentication; (b) it has design flaws at the password change phase; (c) users’ biometric may be locked out; (d) it fails to achieve users’ anonymous identity. To solve these problems, a new scheme using the theory of Secure Sketch is proposed. The thorough analysis shows that our scheme can provide a stronger security than Yan-Li’s protocol, despite the little higher computation cost at client. What’s more, the proposed scheme not only can achieve anonymity preserving but also can achieve session key agreement.", "title": "" }, { "docid": "defde14c64f5eecda83cf2a59c896bc0", "text": "Time series shapelets are discriminative subsequences and their similarity to a time series can be used for time series classification. Since the discovery of time series shapelets is costly in terms of time, the applicability on long or multivariate time series is difficult. In this work we propose Ultra-Fast Shapelets that uses a number of random shapelets. It is shown that Ultra-Fast Shapelets yield the same prediction quality as current state-of-theart shapelet-based time series classifiers that carefully select the shapelets by being by up to three orders of magnitudes. Since this method allows a ultra-fast shapelet discovery, using shapelets for long multivariate time series classification becomes feasible. A method for using shapelets for multivariate time series is proposed and Ultra-Fast Shapelets is proven to be successful in comparison to state-of-the-art multivariate time series classifiers on 15 multivariate time series datasets from various domains. Finally, time series derivatives that have proven to be useful for other time series classifiers are investigated for the shapelet-based classifiers. It is shown that they have a positive impact and that they are easy to integrate with a simple preprocessing step, without the need of adapting the shapelet discovery algorithm.", "title": "" }, { "docid": "a0e7712da82a338fda01e1fd0bb4a44e", "text": "Compliance specifications concisely describe selected aspects of what a business operation should adhere to. To enable automated techniques for compliance checking, it is important that these requirements are specified correctly and precisely, describing exactly the behavior intended. Although there are rigorous mathematical formalisms for representing compliance rules, these are often perceived to be difficult to use for business users. Regardless of notation, however, there are often subtle but important details in compliance requirements that need to be considered. The main challenge in compliance checking is to bridge the gap between informal description and a precise specification of all requirements. In this paper, we present an approach which aims to facilitate creating and understanding formal compliance requirements by providing configurable templates that capture these details as options for commonly-required compliance requirements. These options are configured interactively with end-users, using question trees and natural language. 
The approach is implemented in the Process Mining Toolkit ProM.", "title": "" }, { "docid": "4551c05bbf8969d310d548d5a773f584", "text": "Optical testing of advanced CMOS circuits successfully exploits the near-infrared photon emission by hot-carriers in transistor channels (see EMMI (Ng et al., 1999) and PICA (Kash and Tsang, 1997) (Song et al., 2005) techniques). However, due to the continuous scaling of features size and supply voltage, spontaneous emission is becoming fainter and optical circuit diagnostics becomes more challenging. Here we present the experimental characterization of hot-carrier luminescence emitted by transistors in four CMOS technologies from two different manufacturers. Aim of the research is to gain a better perspective on emission trends and dependences on technological parameters. In particular, we identify luminescence changes due to short-channel effects (SCE) and we ascertain that, for each technology node, there are two operating regions, for short- and long-channels. We highlight the emission reduction of p-FETs compared to n-FETs, due to a \"red-shift\" (lower energy) of the hot-carrier distribution. Eventually, we give perspectives about emission trends in actual and future technology nodes, showing that luminescence dramatically decreases with voltage, but it recovers strength when moving from older to more advanced technology generations. Such results extend the applicability of optical testing techniques, based on present single-photon detectors, to future low-voltage chips", "title": "" }, { "docid": "191058192146249d5cf9493eb41a37c2", "text": "Cryptocurrency networks have given birth to a diversity of start-ups and attracted a huge influx of venture capital to invest in these start-ups for creating and capturing value within and between such networks. Synthesizing strategic management and information systems (IS) literature, this study advances a unified theoretical framework for identifying and investigating how cryptocurrency companies configure value through digital business models. This framework is then employed, via multiple case studies, to examine digital business models of companies within the bitcoin network. Findings suggest that companies within the bitcoin network exhibits six generic digital business models. These six digital business models are in turn driven by three modes of value configurations with their own distinct logic for value creation and mechanisms for value capturing. A key finding of this study is that value-chain and value-network driven business models commercialize their products and services for each value unit transfer, whereas commercialization for value-shop driven business models is realized through the subsidization of direct users by revenue generating entities. This study contributes to extant literature on value configurations and digital businesses models within the emerging and increasingly pervasive domain of cryptocurrency networks.", "title": "" }, { "docid": "1b27922ab1693a15d230301c3a868afd", "text": "Model based iterative reconstruction (MBIR) algorithms for low-dose X-ray CT are computationally complex because of the repeated use of the forward and backward projection. Inspired by this success of deep learning in computer vision applications, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won the second place in 2016 AAPM Low-Dose CT Grand Challenge. However, some of the texture are not fully recovered, which was unfamiliar to the radiologists. 
To cope with this problem, here we propose a direct residual learning approach on directional wavelet domain to solve this problem and to improve the performance against previous work. In particular, the new network estimates the noise of each input wavelet transform, and then the de-noised wavelet coefficients are obtained by subtracting the noise from the input wavelet transform bands. The experimental results confirm that the proposed network has significantly improved performance, preserving the detail texture of the original images.", "title": "" } ]
scidocsrr
7628e6f39c6b027b74ac52746b2df581
Critical Success Factors of Enterprise Resource Planning Systems Implementation Success in China
[ { "docid": "fe8398493a04c367b089b175711984d7", "text": "E RP software packages that manage and integrate business processes across organizational functions and locations cost millions of dollars to buy, several times as much to implement, and necessitate disruptive organizational change. While some companies have enjoyed significant gains, others have had to scale back their projects and accept minimal benefits, or even abandon implementation of ERP projects [4]. Historically, a common problem when adopting package software has been the issue of “misfits,” that is, the gaps between the functionality offered by the package and that required by the adopting organization [1, 3]. As a result, organizations have had to choose among adapting to the new functionality, living with the shortfall, instituting workarounds, or customizing the package. ERP software, as a class of package software, also presents this problematic choice to organizations. The problem is exacerbated because ERP implementation is more complex due to cross-module integration, data standardization, adoption of the underlying business model (“best practices”), compressed implementation schedule, and the involvement of a large number of stakeholders. The knowledge gap among implementation personnel is usually significant. Few organizational users underChristina Soh, Sia Siew Kien, and Joanne Tay-Yap", "title": "" } ]
[ { "docid": "f169f42bcdbaf79e7efa9b1066b86523", "text": "Logic and Philosophy of Science Research Group, Hokkaido University, Japan Jan 7, 2015 Abstract In this paper we provide an analysis and overview of some notable definitions, works and thoughts concerning discrete physics (digital philosophy) that mainly suggest a finite and discrete characteristic for the physical world, as well as, of the cellular automaton, which could serve as the basis of a (or the only) perfect mathematical deterministic model for the physical reality.", "title": "" }, { "docid": "db4bd26b63c70fd42109cc605f2ef44b", "text": "The most commonly used statistical models of civil war onset fail to correctly predict most occurrences of this rare event in out-of-sample data. Statistical methods for the analysis of binary data, such as logistic regression, even in their rare event and regularized forms, perform poorly at prediction. We compare the performance of Random Forests with three versions of logistic regression (classic logistic regression, Firth rare events logistic regression, and L1-regularized logistic regression), and find that the algorithmic approach provides significantly more accurate predictions of civil war onset in out-of-sample data than any of the logistic regression models. The article discusses these results and the ways in which algorithmic statistical methods like Random Forests can be useful to more accurately predict rare events in conflict data.", "title": "" }, { "docid": "0e218dd5654ae9125d40bdd5c0a326d6", "text": "Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on traditional happens-before relation and incur a large basic overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider) and they, however, detect a race only when the race really occurs by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named as Clock Races, for low overhead sampling purpose. The innovation of clock races is that the detection of them does not rely on concrete locks and also avoids heavy basic overhead from tracking happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at 1% sampling rate. Whereas, Pacer and DataCollider incurred larger than 25% and 96% overhead, respectively. Besides, at the same sampling rate, CRSampler detected significantly more data races than that by Pacer and DataCollider.", "title": "" }, { "docid": "4664078796689bb276f7710c1cfd56b6", "text": "College of Computer and Information Technology, China Three Gorges University, Yichang Hubei, 443000, China wannjy@163.com Abstract In the family of Learning Classifier Systems, the classifier system XCS has been successfully used for many applications. However, the standard XCS has no memory mechanism and can only learn optimal policy in Markov environments, where the optimal action is determined solely by the state of current sensory input. In practice, most environments are partially observable environments on agent’s sensation, which are also known as non-Markov environments. 
Within these environments, XCS either fails, or only develops a suboptimal policy, since it has no memory. In this work, we develop a new classifier system based on XCS to tackle this problem. It adds an internal message list to XCS as the memory list to record input sensation history, and extends a small number of classifiers with memory conditions. The classifier’s memory condition, as a foothold to disambiguate non-Markov states, is used to sense a specified element in the memory list. Besides, a detection method is employed to recognize non-Markov states in environments, to avoid these states controlling over classifiers’ memory conditions. Furthermore, four sets of different complex maze environments have been tested by the proposed method. Experimental results show that our system is one of the best techniques to solve partially observable environments, compared with some well-known classifier systems proposed for these environments.", "title": "" }, { "docid": "e3374d5fd1abf4f747a05341d4d04ec6", "text": "The written medium through which we commonly learn about relevant news are news articles. Since there is an abundance of news articles that are written daily, the readers have a common problem of discovering the content of interest and still not be overwhelmed with the amount of it. In this paper we present a system called Event Registry which is able to group articles about an event across languages and extract from the articles core event information in a structured form. In this way, the amount of content that the reader has to check is significantly reduced while additionally providing the reader with a global coverage of each event. Since all event information is structured this also provides extensive and fine-grained options for information searching and filtering that are not available with current news aggregators.", "title": "" }, { "docid": "cd64fdc5cee4d603e6e7335e8d9c4956", "text": "An integrated triple-band GSM antenna switch module, fabricated in RF CMOS on a sapphire substrate, is presented in this paper. The low cost and compact size requirements in wireless and mobile communication systems motivate the continuing integration of the analog portions of the design. The antenna switch die incorporates a FET switch, transmit path filters, and all bias and control circuitry on the same substrate using a 0.5 /spl mu/m CMOS process. A revised version of the die is also proposed, which makes use of an additional copper interconnect layer to reduce die area.", "title": "" }, { "docid": "d0b29493c64e787ed88ad8166d691c3d", "text": "Mobile apps have to satisfy various privacy requirements. Notably, app publishers are often obligated to provide a privacy policy and notify users of their apps’ privacy practices. But how can a user tell whether an app behaves as its policy promises? In this study we introduce a scalable system to help analyze and predict Android apps’ compliance with privacy requirements. We discuss how we customized our system in a collaboration with the California Office of the Attorney General. Beyond its use by regulators and activists our system is also meant to assist app publishers and app store owners in their internal assessments of privacy requirement compliance. Our analysis of 17,991 free Android apps shows the viability of combining machine learning-based privacy policy analysis with static code analysis of apps. Results suggest that 71% of apps tha lack a privacy policy should have one. 
Also, for 9,050 apps that have a policy, we find many instances of potential inconsistencies between what the app policy seems to state and what the code of the app appears to do. In particular, as many as 41% of these apps could be collecting location information and 17% could be sharing such with third parties without disclosing so in their policies. Overall, each app exhibits a mean of 1.83 potential privacy requirement inconsistencies.", "title": "" }, { "docid": "dd0a1a3d6de377efc0a97004376749b6", "text": "Time series often have a temporal hierarchy, with information that is spread out over multiple time scales. Common recurrent neural networks, however, do not explicitly accommodate such a hierarchy, and most research on them has been focusing on training algorithms rather than on their basic architecture. In this paper we study the effect of a hierarchy of recurrent neural networks on processing time series. Here, each layer is a recurrent network which receives the hidden state of the previous layer as input. This architecture allows us to perform hierarchical processing on difficult temporal tasks, and more naturally capture the structure of time series. We show that they reach state-of-the-art performance for recurrent networks in character-level language modeling when trained with simple stochastic gradient descent. We also offer an analysis of the different emergent time scales.", "title": "" }, { "docid": "bb98b9a825a4c7d0f3d4b06fafb8ff37", "text": "The tremendous evolution of programmable graphics hardware has made high-quality real-time volume graphics a reality. In addition to the traditional application of rendering volume data in scientific visualization, the interest in applying these techniques for real-time rendering of atmospheric phenomena and participating media such as fire, smoke, and clouds is growing rapidly. This course covers both applications in scientific visualization, e.g., medical volume data, and real-time rendering, such as advanced effects and illumination in computer games, in detail. Course participants will learn techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects. Beginning with basic texture-based approaches including hardware ray casting, the algorithms are improved and expanded incrementally, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, volume animation and deformation, dealing with large volumes, high-quality volume clipping, rendering segmented volumes, higher-order filtering, and non-photorealistic volume rendering. Course participants are provided with documented source code covering details usually omitted in publications.", "title": "" }, { "docid": "99b6873a9f3fd01ecfd4ba141df21f12", "text": "This paper shows how a rational Bitcoin miner should select transactions from his node’s mempool, when creating a new block, in order to maximize his profit in the absence of a block size limit. To show this, the paper introduces the block space supply curve and the mempool demand curve. The former describes the cost for a miner to supply block space by accounting for orphaning risk. The latter represents the fees offered by the transactions in mempool, and is expressed versus the minimum block size required to claim a given portion of the fees. 
The paper explains how the supply and demand curves from classical economics are related to the derivatives of these two curves, and proves that producing the quantity of block space indicated by their intersection point maximizes the miner’s profit. The paper then shows that an unhealthy fee market—where miners are incentivized to produce arbitrarily large blocks—cannot exist since it requires communicating information at an arbitrarily fast rate. The paper concludes by considering the conditions under which a rational miner would produce big, small or empty blocks, and by estimating the cost of a spam attack.", "title": "" }, { "docid": "b6303ae2b77ac5c187694d5320ef65ff", "text": "Mechanisms for continuously changing or shifting a system's attack surface are emerging as game-changers in cyber security. In this paper, we propose a novel defense mechanism for protecting the identity of nodes in Mobile Ad Hoc Networks and defeat the attacker's reconnaissance efforts. The proposed mechanism turns a classical attack mechanism - Sybil - into an effective defense mechanism, with legitimate nodes periodically changing their virtual identity in order to increase the uncertainty for the attacker. To preserve communication among legitimate nodes, we modify the network layer by introducing (i) a translation service for mapping virtual identities to real identities; (ii) a protocol for propagating updates of a node's virtual identity to all legitimate nodes; and (iii) a mechanism for legitimate nodes to securely join the network. We show that the proposed approach is robust to different types of attacks, and also show that the overhead introduced by the update protocol can be controlled by tuning the update frequency.", "title": "" }, { "docid": "e22a3cd1887d905fffad0f9d14132ed6", "text": "Relativistic electron beam generation studies have been carried out in LIA-400 system through explosive electron emission for various cathode materials. This paper presents the emission properties of different cathode materials at peak diode voltages varying from 10 to 220 kV and at peak current levels from 0.5 to 2.2 kA in a single pulse duration of 160-180 ns. The cathode materials used are graphite, stainless steel, and red polymer velvet. The perveance data calculated from experimental waveforms are compared with 1-D Child Langmuir formula to obtain the cathode plasma expansion velocity for various cathode materials. Various diode parameters are subject to shot to shot variation analysis. Velvet cathode proves to be the best electron emitter because of its lower plasma expansion velocity and least shot to shot variability.", "title": "" }, { "docid": "5ced8b93ad1fb80bb0c5324d34af9269", "text": "This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. 
Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.", "title": "" }, { "docid": "d84bd9aecd5e5a5b744bbdbffddfd65f", "text": "Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver. However, the index construction of these variables could result in their strong correlation, thus preventing rated characters from being plotted accurately. Phase 1 of this study tested the indices of the Godspeed questionnaire as measures of humanlike characters. The results indicate significant and strong correlations among the relevant indices (Bartneck, Kulić, Croft, & Zoghbi, 2009). Phase 2 of this study developed alternative indices with nonsignificant correlations (p > .05) between the proposed y-axis eeriness and x-axis perceived humanness (r = .02). The new humanness and eeriness indices facilitate plotting relations among rated characters of varying human likeness. 2010 Elsevier Ltd. All rights reserved. 1. Plotting emotional responses to humanlike characters Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver (Fig. 1). The graph predicts that more human-looking characters will be perceived as more agreeable up to a point at which they become so human people find their nonhuman imperfections unsettling (MacDorman, Green, Ho, & Koch, 2009; MacDorman & Ishiguro, 2006; Mori, 1970). This dip in appraisal marks the start of the uncanny valley (bukimi no tani in Japanese). As characters near complete human likeness, they rise out of the valley, and people once again feel at ease with them. In essence, a character’s imperfections expose a mismatch between the human qualities that are expected and the nonhuman qualities that instead follow, or vice versa. As an example of things that lie in the uncanny valley, Mori (1970) cites corpses, zombies, mannequins coming to life, and lifelike prosthetic hands. Assuming the uncanny valley exists, what dependent variable is appropriate to represent Mori’s graph? Mori referred to the y-axis as shinwakan, a neologism even in Japanese, which has been variously translated as familiarity, rapport, and comfort level. 
Bartneck, Kanda, Ishiguro, and Hagita (2009) have proposed using likeability to represent shinwakan, and they applied a likeability index to the evaluation of interactions with Ishiguro’s android double, the Geminoid HI-1. Likeability is virtually synonymous with interpersonal warmth (Asch, 1946; Fiske, Cuddy, & Glick, 2007; Rosenberg, Nelson, & Vivekananthan, 1968), which is also strongly correlated with other important measures, such as comfortability, communality, sociability, and positive (vs. negative) affect (Abele & Wojciszke, 2007; MacDorman, Ough, & Ho, 2007; Mehrabian & Russell, 1974; Sproull, Subramani, Kiesler, Walker, & Waters, 1996; Wojciszke, Abele, & Baryla, 2009). Warmth is the primary dimension of human social perception, accounting for 53% of the variance in perceptions of everyday social behaviors (Fiske, Cuddy, Glick, & Xu, 2002; Fiske et al., 2007; Wojciszke, Bazinska, & Jaworski, 1998). Despite the importance of warmth, this concept misses the essence of the uncanny valley. Mori (1970) refers to negative shinwakan as bukimi, which translates as eeriness. However, eeriness is not the negative anchor of warmth. A person can be cold and disagreeable without being eerie—at least not eerie in the way that an artificial human being is eerie. In addition, the set of negative emotions that predict eeriness (e.g., fear, anxiety, and disgust) are more specific than coldness (Ho, MacDorman, & Pramono, 2008). Thus, shinwakan and bukimi appear to constitute distinct dimensions. Although much has been written on potential benchmarks for anthropomorphic robots (for reviews see Kahn et al., 2007; MacDorman & Cowley, 2006; MacDorman & Kahn, 2007), no indices have been developed and empirically validated for measuring shinwakan or related concepts across a range of humanlike stimuli, such as computer-animated human characters and humanoid robots. The Godspeed questionnaire, compiled by Bartneck, Kulić, Croft, and Zoghbi (2009), includes at least two concepts, anthropomorphism and likeability, that could potentially serve as the x- and y-axes of Mori’s graph (Bartneck, Kanda, et al., 2009).", "title": "" }, { "docid": "4ca4ccd53064c7a9189fef3e801612a0", "text": "workflows, data warehousing, business intelligence Process design and automation technologies are being increasingly used by both traditional and newly-formed, Internet-based enterprises in order to improve the quality and efficiency of their administrative and production processes, to manage e-commerce transactions, and to rapidly and reliably deliver services to businesses and individual customers.", "title": "" }, { "docid": "2eb666fcc1c44958906c66aa58ae5946", "text": "Graph clustering or community detection constitutes an important task for investigating the internal structure of graphs, with a plethora of applications in several domains. Traditional tools for graph clustering, such as spectral methods, typically suffer from high time and space complexity. In this article, we present CORECLUSTER, an efficient graph clustering framework based on the concept of graph degeneracy, that can be used along with any known graph clustering algorithm. 
Our approach capitalizes on processing the graph in a hierarchical manner provided by its core expansion sequence, an ordered partition of the graph into different levels according to the k-core decomposition. Such a partition provides a way to process the graph in an incremental manner that preserves its clustering structure, while making the execution of the chosen clustering algorithm much faster due to the smaller size of the graph’s partitions onto which the algorithm operates. Introduction Detecting clusters or communities in graphs constitutes a cornerstone problem with many applications in several disciplines. Characteristic application domains include social and information network analysis, biological networks, recommender systems and image segmentation. Due to its importance and multidisciplinary nature, the problem of graph clustering has received great attention from the research community and numerous algorithms have been proposed (see (Fortunato 2010) for a survey in the area). Spectral clustering methods (e.g., (Ng, Jordan, and Weiss 2001)) impose a high cost of computing resources both in time and space regardless of the data on which it is going to be applied (Fortunato 2010). Other well-known approaches for community detection are the ones based on modularity optimization (Newman and Girvan 2004; Clauset, Newman, and Moore 2004), stochastic flow simulation (Satuluri and Parthasarathy 2009) and local partitioning methods (Fortunato 2010). In any case, scalability is still a major challenge in the graph clustering task, especially nowadays with the significant increase of the graphs’ size. Typically, there are two main methodologies for scaling up a graph clustering method: (i) algorithm-oriented and (ii) data-oriented. The first one considers the algorithm of interest and appropriately optimizes – whenever is possible – the “parts” of the algorithm responsible for scalability issues. Prominent examples here are the fast modularity optimization method (Clauset, Newman, and Moore 2004) and the scalable flow-based Markov clustering algorithm (Satuluri and Parthasarathy 2009). The second and widely used methodology is to rely on sampling/sparsification techniques. In this case, the size of the graph onto which the algorithm will operate is reduced, by disregarding nodes/edges. However, in this approach possibly useful structural information of the graph (i.e., nodes/edges) is ignored. In this paper, we propose CORECLUSTER, a graph clustering framework that capitalizes on the notion of graph degeneracy – also known as k-core decomposition (Seidman 1983). The main idea behind our approach is to combine any known graph clustering algorithm with an easy-to-compute, clustering-preserving hierarchical representation of the graph – as produced by the k-core decomposition – towards a scalable graph clustering tool. The k-core of a graph is a maximal size subgraph where each node has at least k neighbors in the subgraph (we say that k is the rank of such a core). The maximum k for which a graph contains a k-core is known as its degeneracy. We refer to this core as “the densest core”. Intuitively, the k-core of such a graph is located in its “densest territories”. Based on this idea, we show that the densest cores of a graph are roughly maintaining its clustering structure and thus constitute good starting points (seed subgraphs) for computing it. 
Given the fact that the size of the densest core of a graph is orders of magnitude smaller than that of the original graph, we apply a clustering algorithm starting from its densest core and then, on the resulting structure, we incrementally cluster the rest of the nodes in the lower rank cores in decreasing order – following the hierarchy produced by the k-core decomposition. The main contributions of this paper are the following: • Clustering Framework: We introduce CORECLUSTER, a scalable degeneracy-based graph clustering framework, that can be used along with any known graph clustering algorithm. We show how CORECLUSTER utilizes the kcore decomposition of a graph in order to (i) select seed subgraphs for starting the clustering process and (ii) expand the already formed clusters or create new ones. • Scalability and Accuracy Analysis: We discuss analytically the ability of CORECLUSTER to scale-up, describing its expected running time. We also justify why the k-core structure captures the clustering properties of a graph, thus being able to indicate good seed subgraphs for a clustering algorithm. • Experiments: We perform an extensive experimental evaluation regarding the efficiency and accuracy of CORECLUSTER, both on synthetic and real-world graphs. The experimental results show that the time complexity is improved by 3-4 orders of magnitude (compared to a baseline algorithm), especially for large graphs. Moreover, for graphs with inherent community structure, the quality of the results is maintained or even is improved.", "title": "" }, { "docid": "aef76a8375b12f4c38391093640a704a", "text": "Storytelling plays an important role in human life, from everyday communication to entertainment. Interactive storytelling (IS) offers its audience an opportunity to actively participate in the story being told, particularly in video games. Managing the narrative experience of the player is a complex process that involves choices, authorial goals and constraints of a given story setting (e.g., a fairy tale). Over the last several decades, a number of experience managers using artificial intelligence (AI) methods such as planning and constraint satisfaction have been developed. In this paper, we extend existing work and propose a new AI experience manager called player-specific automated storytelling (PAST), which uses automated planning to satisfy the story setting and authorial constraints in response to the player's actions. Out of the possible stories algorithmically generated by the planner in response, the one that is expected to suit the player's style best is selected. To do so, we employ automated player modeling. We evaluate PAST within a video-game domain with user studies and discuss the effects of combining planning and player modeling on the player's perception of agency.", "title": "" }, { "docid": "087b1951ec35db6de6f4739404277913", "text": "A possible scenario for the evolution of Television Broadcast is the adoption of 8 K resolution video broadcasting. To achieve the required bit-rates MIMO technologies are an actual candidate. In this scenario, this paper collected electric field levels from a MIMO experimental system for TV broadcasting to tune the parameters of the ITU-R P.1546 propagation model, which has been employed to model VHF and UHF broadcast channels. The parameters are tuned for each polarization alone and for both together. This is done considering multiple reception points and also a larger capturing time interval for a fixed reception site. 
Significant improvements on the match between the actual and measured link budget are provided by the optimized parameters.", "title": "" }, { "docid": "09436ce5064f5e828a0d1f1656608de3", "text": "Psychometric modeling using digital data traces is a growing field of research with a breadth of potential applications in marketing, personalization and psychological assessment. We present a novel form of digital traces for user modeling: temporal patterns of smartphone and personal computer activity. We show that some temporal activity metrics are highly correlated with certain Big Five personality metrics. We then present a machine learning method for binary classification of each Big Five personality trait using these temporal activity patterns of both computer and smartphones as model features. Our initial findings suggest that Extroversion, Openness, Agreeableness, and Neuroticism can be classified using temporal patterns of digital traces at a similar accuracy to previous research that classified personality traits using different types of digital traces.", "title": "" }, { "docid": "9118bd3f700d197a3fc8ca08204abd45", "text": "CONTEXT\nThe incidence of localised prostate cancer is increasing worldwide. In light of recent evidence, current, radical, whole-gland treatments for organ-confined disease have being questioned with respect to their side effects, cancer control, and cost. Focal therapy may be an effective alternative strategy.\n\n\nOBJECTIVE\nTo systematically review the existing literature on baseline characteristics of the target population; preoperative evaluation to localise disease; and perioperative, functional, and disease control outcomes following focal therapy.\n\n\nEVIDENCE ACQUISITION\nMedline (through PubMed), Embase, Web of Science, and Cochrane Review databases were searched from inception to 31 October 2012. In addition, registered but not yet published trials were retrieved. Studies evaluating tissue-preserving therapies in men with biopsy-proven prostate cancer in the primary or salvage setting were included.\n\n\nEVIDENCE SYNTHESIS\nA total of 2350 cases were treated to date across 30 studies. Most studies were retrospective with variable standards of reporting, although there was an increasing number of prospective registered trials. Focal therapy was mainly delivered to men with low and intermediate disease, although some high-risk cases were treated that had known, unilateral, significant cancer. In most of the cases, biopsy findings were correlated to specific preoperative imaging, such as multiparametric magnetic resonance imaging or Doppler ultrasound to determine eligibility. Follow-up varied between 0 and 11.1 yr. In treatment-naïve prostates, pad-free continence ranged from 95% to 100%, erectile function ranged from 54% to 100%, and absence of clinically significant cancer ranged from 83% to 100%. In focal salvage cases for radiotherapy failure, the same outcomes were achieved in 87.2-100%, 29-40%, and 92% of cases, respectively. Biochemical disease-free survival was reported using a number of definitions that were not validated in the focal-therapy setting.\n\n\nCONCLUSIONS\nOur systematic review highlights that, when focal therapy is delivered with intention to treat, the perioperative, functional, and disease control outcomes are encouraging within a short- to medium-term follow-up. 
Focal therapy is a strategy by which the overtreatment burden of the current prostate cancer pathway could be reduced, but robust comparative effectiveness studies are now required.", "title": "" } ]
scidocsrr
3bd533a37441497ae2b6fd1f8abe8f6e
A resilience-based framework for evaluating adaptive co-management : Linking ecology , economics and society in a complex world
[ { "docid": "5eb526843c41d2549862b60c17110b5b", "text": "■ Abstract We explore the social dimension that enables adaptive ecosystem-based management. The review concentrates on experiences of adaptive governance of socialecological systems during periods of abrupt change (crisis) and investigates social sources of renewal and reorganization. Such governance connects individuals, organizations, agencies, and institutions at multiple organizational levels. Key persons provide leadership, trust, vision, meaning, and they help transform management organizations toward a learning environment. Adaptive governance systems often self-organize as social networks with teams and actor groups that draw on various knowledge systems and experiences for the development of a common understanding and policies. The emergence of “bridging organizations” seem to lower the costs of collaboration and conflict resolution, and enabling legislation and governmental policies can support self-organization while framing creativity for adaptive comanagement efforts. A resilient social-ecological system may make use of crisis as an opportunity to transform into a more desired state.", "title": "" } ]
[ { "docid": "9a87f11fed489f58b0cdd15b329e5245", "text": "BACKGROUND\nBracing is an effective strategy for scoliosis treatment, but there is no consensus on the best type of brace, nor on the way in which it should act on the spine to achieve good correction. The aim of this paper is to present the family of SPoRT (Symmetric, Patient-oriented, Rigid, Three-dimensional, active) braces: Sforzesco (the first introduced), Sibilla and Lapadula.\n\n\nMETHODS\nThe Sforzesco brace was developed following specific principles of correction. Due to its overall symmetry, the brace provides space over pathological depressions and pushes over elevations. Correction is reached through construction of the envelope, pushes, escapes, stops, and drivers. The real novelty is the drivers, introduced for the first time with the Sforzesco brace; they allow to achieve the main action of the brace: a three-dimensional elongation pushing the spine in a down-up direction.Brace prescription is made plane by plane: frontal (on the \"slopes\", another novelty of this concept, i.e. the laterally flexed sections of the spine), horizontal, and sagittal. The brace is built modelling the trunk shape obtained either by a plaster cast mould or by CAD-CAM construction. Brace checking is essential, since SPoRT braces are adjustable and customisable according to each individual curve pattern.Treatment time and duration is individually tailored (18-23 hours per day until Risser 3, then gradual reduction). SEAS (Scientific Exercises Approach to Scoliosis) exercises are a key factor to achieve success.\n\n\nRESULTS\nThe Sforzesco brace has shown to be more effective than the Lyon brace (matched case/control), equally effective as the Risser plaster cast (prospective cohort with retrospective controls), more effective than the Risser cast + Lyon brace in treating curves over 45 degrees Cobb (prospective cohort), and is able to improve aesthetic appearance (prospective cohort).\n\n\nCONCLUSIONS\nThe SPoRT concept of bracing (three-dimensional elongation pushing in a down-up direction) is different from the other corrective systems: 3-point, traction, postural, and movement-based. The Sforzesco brace, being comparable to casting, may be the best brace for the worst cases.", "title": "" }, { "docid": "0158d18bbe621196f144ae9ed4b5db2d", "text": "We introduce a novel metric for speech recognition success in voice search tasks, designed to reflect the impact of speech recognition errors on user's overall experience with the system. The computation of the metric is seeded using intuitive labels from human subjects and subsequently automated by replacing human annotations with a machine learning algorithm. The results show that search-based recognition accuracy is significantly higher than accuracy based on sentence error rate computation, and that the automated system is very successful in replicating human judgments regarding search quality results.", "title": "" }, { "docid": "5923cd462b5b09a3aabd0fbf5c36f00c", "text": "Exoskeleton robots are used as assistive limbs for elderly persons, rehabilitation for paralyzed persons or power augmentation purposes for healthy persons. The similarity of the exoskeleton robots and human body neuro-muscular system maximizes the device performance. Human body neuro-muscular system provides a flexible and safe movement capability with minimum energy consumption by varying the stiffness of the human joints regularly. 
Similar to human body, variable stiffness actuators should be used to provide a flexible and safe movement capability in exoskeletons. In the present day, different types of variable stiffness actuator designs are used, and the studies on these actuators are still continuing rapidly. As exoskeleton robots are mobile devices working with the equipment such as batteries, the motors used in the design are expected to have minimal power requirements. In this study, antagonistic, pre-tension and controllable transmission ratio type variable stiffness actuators are compared in terms of energy efficiency and power requirement at an optimal (medium) walking speed for ankle joint. In the case of variable stiffness, the results show that the controllable transmission ratio type actuator compared with the antagonistic design is more efficient in terms of energy consumption and power requirement.", "title": "" }, { "docid": "3fd52b589a58f449ab1c03a19a034a2d", "text": "This paper presents a low-power high-bit-rate phase modulator based on a digital PLL with single-bit TDC and two-point injection scheme. At high bit rates, this scheme requires a controlled oscillator with wide tuning range and becomes critically sensitive to the delay spread between the two injection paths, considerably degrading the achievable error-vector magnitude and causing significant spectral regrowth. A multi-capacitor-bank oscillator topology with an automatic background regulation of the gains of the banks and a digital adaptive filter for the delay-spread correction are introduced. The phase modulator fabricated in a 65-nm CMOS process synthesizes carriers in the 2.9-to-4.0-GHz range from a 40-MHz crystal reference and it is able to produce a phase change up to ±π with 10-bit resolution in a single reference cycle. Measured EVM at 3.6 GHz is -36 dB for a 10-Mb/s GMSK and a 20-Mb/s QPSK modulation. Power dissipation is 5 mW from a 1.2-V voltage supply, leading to a total energy consumption of 0.25 nJ/bit.", "title": "" }, { "docid": "7394baa66902d1330cd0fbf27c0b0d98", "text": "With the world turning into a global village due to technological advancements, automation in all aspects of life is gaining momentum. Wireless technologies address the everincreasing demands of portable and flexible communications. Wireless ad-hoc networks, which allow communication between devices without the need for any central infrastructure, are gaining significance, particularly for monitoring and surveillance applications. A relatively new research area of ad-hoc networks is flying ad-hoc networks (FANETs), governing the autonomous movement of unmanned aerial vehicles (UAVs) [1]. In such networks multiple UAVs are allowed to communicate so that an ad-hoc network is established between them. All UAVs in the network carry UAV-to-UAV communication and only groups of UAVs interact with the ground station. This feature eliminates the need for deployment of complex hardware in each UAV. 
Moreover, even if one of the UAV communication links breaks down; there is no link breakage with the base station due to the ad-hoc network between UAVs.", "title": "" }, { "docid": "9b5eca94a1e02e97e660d0f5e445a8a1", "text": "PURPOSE\nThe purpose of this study was to evaluate the effect of individualized repeated intravitreal injections of ranibizumab (Lucentis, Genentech, South San Francisco, CA) on visual acuity and central foveal thickness (CFT) for branch retinal vein occlusion-induced macular edema.\n\n\nMETHODS\nThis study was a prospective interventional case series. Twenty-eight eyes of 28 consecutive patients diagnosed with branch retinal vein occlusion-related macular edema treated with repeated intravitreal injections of ranibizumab (when CFT was >225 microm) were evaluated. Optical coherence tomography and fluorescein angiography were performed monthly.\n\n\nRESULTS\nThe mean best-corrected distance visual acuity improved from 62.67 Early Treatment of Diabetic Retinopathy Study letters (logarithm of the minimum angle of resolution = 0.74 +/- 0.28 [mean +/- standard deviation]) at baseline to 76.8 Early Treatment of Diabetic Retinopathy Study letters (logarithm of the minimum angle of resolution = 0.49 +/- 0.3; statistically significant, P < 0.001) at the end of the follow-up (9 months). The mean letter gain (including the patients with stable and worse visual acuities) was 14.3 letters (2.9 lines). During the same period, 22 of the 28 eyes (78.6%) showed improved visual acuity, 4 (14.2%) had stable visual acuity, and 2 (7.14%) had worse visual acuity compared with baseline. The mean CFT improved from 349 +/- 112 microm at baseline to 229 +/- 44 microm (significant, P < 0.001) at the end of follow-up. A mean of six injections was performed during the follow-up period. Our subgroup analysis indicated that patients with worse visual acuity at presentation (<or=50 letters in our series) showed greater visual benefit from treatment. \"Rebound\" macular edema was observed in 5 patients (17.85%) at the 3-month follow-up visit and in none at the 6- and 9-month follow-ups. In 18 of the 28 patients (53.6%), the CFT was <225 microm at the last follow-up visit, and therefore, further treatment was not instituted. No ocular or systemic side effects were noted.\n\n\nCONCLUSION\nIndividualized repeated intravitreal injections of ranibizumab showed promising short-term results in visual acuity improvement and decrease in CFT in patients with macular edema associated with branch retinal vein occlusion. Further studies are needed to prove the long-term effect of ranibizumab treatment on patients with branch retinal vein occlusion.", "title": "" }, { "docid": "b8217034df7563c8b6c0b3191ab8232a", "text": "BACKGROUND\nA holistic approach to health requires the development of tools that would allow to measure the inner world of individuals within its physical, mental and social dimensions.\n\n\nOBJECTIVES\nTo create the Physical, Mental and Social Well-being scale (PMSW-21) that allows a holistic representation of various dimensions of well-being in such a way as they are perceived by the individuals and how affected their health.\n\n\nMATERIAL AND METHODS\nThe study was conducted on the sample of 406 inhabitants of Warsaw involving in the Social Participation in Health Reform project. 
The PMSW-21 scale included: headache, tiredness, abdominal pain, palpitation, joint pain, backache, sleep disturbance (physical domain), anxiety, guiltiness, helplessness, hopelessness, sadness, self-dissatisfaction, hostility (mental domain), security, communicability, protection, loneliness, rejection, sociability and appreciation (social domain). The five criterial variables of health and seven of life experiences were adopted to assess the discriminative power of the PMSW-21 scale.\n\n\nRESULTS\nThe total well-being scale as well as its physical, mental and social domains showed high reliability (Cronbach a 0.81, 0.77, 0.90, 0.72, respectively). The analysis confirmed the construct validity. All the items stronger correlated with their own domain than with the others (ranges for physical: 0.41 - 0.55, mental: 0.49 - 0.80 and social: 0.31 - 0.50). The total scale demonstrate high sensitivity; it significantly differentiated almost all criterial variables. Physical domain showed high sensitivity for health as well as for negative life events variables, while the mental and social domains were more sensitive for life events.\n\n\nCONCLUSIONS\nThe analysis confirmed the usefulness of PMSW-21 scale for measure the holistic well-being. The reliability of the total scale and its domains, construct validity and sensitivity for health and life determinants were at acceptable level.", "title": "" }, { "docid": "63685d4935ae48e36d6d83073cd50616", "text": "Graphs provide a powerful means for representing complex interactions between entities. Recently, new deep learning approaches have emerged for representing and modeling graphstructured data while the conventional deep learning methods, such as convolutional neural networks and recurrent neural networks, have mainly focused on the grid-structured inputs of image and audio. Leveraged by representation learning capabilities, deep learning-based techniques can detect structural characteristics of graphs, giving promising results for graph applications. In this paper, we attempt to advance deep learning for graph-structured data by incorporating another component: transfer learning. By transferring the intrinsic geometric information learned in the source domain, our approach can construct a model for a new but related task in the target domain without collecting new data and without training a new model from scratch. We thoroughly tested our approach with large-scale real-world text data and confirmed the effectiveness of the proposed transfer learning framework for deep learning on graphs. According to our experiments, transfer learning is most effective when the source and target domains bear a high level of structural similarity in their graph representations.", "title": "" }, { "docid": "c447ef57b190d129b5a44597c4d2ed80", "text": "As most pancreatic neuroendocrine tumors (PNET) are relatively small and solitary, they may be considered well suited for removal by a minimally invasive approach. There are few large series that describe laparoscopic surgery for PNET. The primary aim of this study was to describe the feasibility, outcome, and histopathology associated with laparoscopic pancreatic surgery (LS) of PNET in a large series. All patients with PNET who underwent LS at a single hospital from March 1997 to April 2011 were included retrospectively in the study. A total of 72 patients with PNET underwent 75 laparoscopic procedures, out of which 65 were laparoscopic resections or enucleations. 
The median operative time of all patients who underwent resections or enucleations was 175 (60–520) min, the median blood loss was 300 (5–2,700) ml, and the median length of hospital stay was 7 (2–27) days. The overall morbidity rate was 42 %, with a surgical morbidity rate of 21 % and postoperative pancreatic fistula (POPF) formation in 21 %. Laparoscopic enucleations were associated with a higher rate of POPF than were laparoscopic resections. Five-year disease-specific survival rate was 90 %. The T stage, R stage, and a Ki-67 cutoff value of 5 % significantly predicted 5-year survival. LS of PNET is feasible with acceptable morbidity and a good overall disease-specific long-term prognosis.", "title": "" }, { "docid": "9a1665cff530d93c84598e7df947099f", "text": "The algorithmic Markov condition states that the most likely causal direction between two random variables X and Y can be identified as the direction with the lowest Kolmogorov complexity. This notion is very powerful as it can detect any causal dependency that can be explained by a physical process. However, due to the halting problem, it is also not computable. In this paper we propose an computable instantiation that provably maintains the key aspects of the ideal. We propose to approximate Kolmogorov complexity via the Minimum Description Length (MDL) principle, using a score that is mini-max optimal with regard to the model class under consideration. This means that even in an adversarial setting, the score degrades gracefully, and we are still maximally able to detect dependencies between the marginal and the conditional distribution. As a proof of concept, we propose CISC, a linear-time algorithm for causal inference by stochastic complexity, for pairs of univariate discrete variables. Experiments show that CISC is highly accurate on synthetic, benchmark, as well as real-world data, outperforming the state of the art by a margin, and scales extremely well with regard to sample and domain sizes.", "title": "" }, { "docid": "91b49384769b178b300f2e3a4bd0b265", "text": "The recently proposed self-ensembling methods have achieved promising results in deep semi-supervised learning, which penalize inconsistent predictions of unlabeled data under different perturbations. However, they only consider adding perturbations to each single data point, while ignoring the connections between data samples. In this paper, we propose a novel method, called Smooth Neighbors on Teacher Graphs (SNTG). In SNTG, a graph is constructed based on the predictions of the teacher model, i.e., the implicit self-ensemble of models. Then the graph serves as a similarity measure with respect to which the representations of \"similar\" neighboring points are learned to be smooth on the low-dimensional manifold. We achieve state-of-the-art results on semi-supervised learning benchmarks. The error rates are 9.89%, 3.99% for CIFAR-10 with 4000 labels, SVHN with 500 labels, respectively. In particular, the improvements are significant when the labels are fewer. For the non-augmented MNIST with only 20 labels, the error rate is reduced from previous 4.81% to 1.36%. Our method also shows robustness to noisy labels.", "title": "" }, { "docid": "5f77e21de8f68cba79fc85e8c0e7725e", "text": "We introduce structured prediction energy networks (SPENs), a flexible framework for structured prediction. 
A deep architecture is used to define an energy function of candidate labels, and then predictions are produced by using backpropagation to iteratively optimize the energy with respect to the labels. This deep architecture captures dependencies between labels that would lead to intractable graphical models, and performs structure learning by automatically learning discriminative features of the structured output. One natural application of our technique is multi-label classification, which traditionally has required strict prior assumptions about the interactions between labels to ensure tractable learning and prediction problems. We are able to apply SPENs to multi-label problems with substantially larger label sets than previous applications of structured prediction, while modeling high-order interactions using minimal structural assumptions. Overall, deep learning provides remarkable tools for learning features of the inputs to a prediction problem, and this work extends these techniques to learning features of structured outputs. Our experiments provide impressive performance on a variety of benchmark multi-label classification tasks, demonstrate that our technique can be used to provide interpretable structure learning, and illuminate fundamental trade-offs between feed-forward and iterative structured prediction techniques.", "title": "" }, { "docid": "bdcb688bc914307d811114b2749e47c2", "text": "E-government initiatives are in their infancy in many developing countries. The success of these initiatives is dependent on government support as well as citizens' adoption of e-government services. This study adopted the unified theory of acceptance and use of technology (UTAUT) model to explore factors that determine the adoption of e-government services in a developing country, namely Kuwait. 880 students were surveyed, using an amended version of the UTAUT model. The empirical data reveal that performance expectancy, effort expectancy and peer influence determine students' behavioural intention. Moreover, facilitating conditions and behavioural intentions determine students' use of e-government services. Implications for decision makers and suggestions for further research are also considered in this study.", "title": "" }, { "docid": "78fecd65b909fbdfeb4b3090b2dadc01", "text": "Advances in antenna technologies for cellular hand-held devices have been synchronous with the evolution of mobile phones over nearly 40 years. Having gone through four major wireless evolutions [1], [2], starting with the analog-based first generation to the current fourth-generation (4G) mobile broadband, technologies from manufacturers and their wireless network capacities today are advancing at unprecedented rates to meet our unrelenting service demands. These ever-growing demands, driven by exponential growth in wireless data usage around the globe [3], have gone hand in hand with major technological milestones achieved by the antenna design community. For instance, realizing the theory regarding the physical limitation of antennas [4]-[6] was paramount to the elimination of external antennas for mobile phones in the 1990s. 
This achievement triggered a variety of revolutionary mobile phone designs and the creation of new wireless services, establishing the current cycle of cellular advances and advances in mobile antenna technologies.", "title": "" }, { "docid": "7716409441fb8e34013d3e9f58d32476", "text": "Decentralized partially observable Markov decision processes (Dec-POMDPs) are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques require centralized computation given full knowledge of the underlying model. Multi-agent reinforcement learning (MARL) based approaches have been recently proposed for distributed solution of Dec-POMDPs; these approaches assume that conditions during learning and policy execution are identical. In some practical scenarios this may not be the case. We propose a novel MARL approach in which agents are allowed to rehearse with information that will not be available during policy execution. The key is for the agents to learn policies that do not explicitly rely on these rehearsal features. We also establish a weak convergence result for our algorithm, RLaR, demonstrating that RLaR converges in probability when certain conditions are met. We show experimentally that incorporating rehearsal features can enhance the learning rate compared to non-rehearsal-based learners, and demonstrate fast, (near) optimal performance on many existing benchmark Dec-POMDP problems. We also compare RLaR against an existing approximate Dec-POMDP solver which, like RLaR, does not assume a priori knowledge of the model. While RLaR's policy representation is not as scalable, we show that RLaR produces higher quality policies for most problems and horizons studied.", "title": "" }, { "docid": "5034984717b3528f7f47a1f88a3b1310", "text": "ALL RIGHTS RESERVED. This document contains material protected under International and Federal Copyright Laws and Treaties. Any unauthorized reprint or use of this material is prohibited. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system without express written permission from the author / publisher.", "title": "" }, { "docid": "68f0bdda44beba9203a785b8be1035bb", "text": "Nasal mucociliary clearance is one of the most important factors affecting nasal delivery of drugs and vaccines. This is also the most important physiological defense mechanism inside the nasal cavity. It removes inhaled (and delivered) particles, microbes and substances trapped in the mucus. Almost all inhaled particles are trapped in the mucus carpet and transported with a rate of 8-10 mm/h toward the pharynx. This transport is conducted by the ciliated cells, which contain about 100-250 motile cellular appendages called cilia, 0.3 µm wide and 5 µm in length that beat about 1000 times every minute or 12-15 Hz. For efficient mucociliary clearance, the interaction between the cilia and the nasal mucus needs to be well structured, where the mucus layer is a tri-layer: an upper gel layer that floats on the lower, more aqueous solution, called the periciliary liquid layer and a third layer of surfactants between these two main layers. Pharmacokinetic calculations of the mucociliary clearance show that this mechanism may account for a substantial difference in bioavailability following nasal delivery. 
If the formulation irritates the nasal mucosa, this mechanism will cause the irritant to be rapidly diluted, followed by increased clearance, and swallowed. The result is a much shorter duration inside the nasal cavity and therefore less nasal bioavailability.", "title": "" }, { "docid": "e4b02298a2ff6361c0a914250f956911", "text": "This paper studies efficient means in dealing with intracategory diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.", "title": "" }, { "docid": "45bb7a2e45bdf752acf8bb86e6871058", "text": "Combination of formal and semi-formal methods is more and more required to produce specifications that can be, on the one hand, understood and thus validated by both designers and users and, on the other hand, precise enough to be verified by formal methods. This motivates our aim to use these complementary paradigms in order to deal with security aspects of information systems. This paper presents a methodology to specify access control policies starting with a set of graphical diagrams: UML for the functional model, SecureUML for static access control and ASTD for dynamic access control. These diagrams are then translated into a set of B machines. Finally, we present the formal specification of an access control filter that coordinates the different kinds of access control rules and the specification of functional operations. The goal of such B specifications is to rigorously check the access control policy of an information system taking advantage of tools from the B method.", "title": "" }, { "docid": "7b3b559c3263a6093b7c7d627501b800", "text": "We propose to tackle the problem of RGB-D image disocclusion inpainting when synthesizing new views of a scene by changing its viewpoint. Indeed, such a process creates holes both in depth and color images. First, we propose a novel algorithm to perform depth-map disocclusion inpainting. Our intuitive approach works particularly well for recovering the lost structures of the objects and to inpaint the depth-map in a geometrically plausible manner. Then, we propose a depth-guided patch-based inpainting method to fill-in the color image. Depth information coming from the reconstructed depth-map is added to each key step of the classical patch-based algorithm from Criminisi et al. in an intuitive manner. Relevant comparisons to the state-of-the-art inpainting methods for the disocclusion inpainting of both depth and color images are provided and illustrate the effectiveness of our proposed algorithms.", "title": "" } ]
scidocsrr
4220bbcffee9633a6e8b66d032440313
An efficient technique for author name disambiguation
[ { "docid": "7f57322b6e998d629d1a67cd5fb28da9", "text": "Background: We recently described “Author-ity,” a model for estimating the probability that two articles in MEDLINE, sharing the same author name, were written by the same individual. Features include shared title words, journal name, coauthors, medical subject headings, language, affiliations, and author name features (middle initial, suffix, and prevalence in MEDLINE). Here we test the hypothesis that the Author-ity model will suffice to disambiguate author names for the vast majority of articles in MEDLINE. Methods: Enhancements include: (a) incorporating first names and their variants, email addresses, and correlations between specific last names and affiliation words; (b) new methods of generating large unbiased training sets; (c) new methods for estimating the prior probability; (d) a weighted least squares algorithm for correcting transitivity violations; and (e) a maximum likelihood based agglomerative algorithm for computing clusters of articles that represent inferred author-individuals. Results: Pairwise comparisons were computed for all author names on all 15.3 million articles in MEDLINE (2006 baseline), that share last name and first initial, to create Author-ity 2006, a database that has each name on each article assigned to one of 6.7 million inferred author-individual clusters. Recall is estimated at ∼98.8%. Lumping (putting two different individuals into the same cluster) affects ∼0.5% of clusters, whereas splitting (assigning articles written by the same individual to >1 cluster) affects ∼2% of articles. Impact: The Author-ity model can be applied generally to other bibliographic databases. Author name disambiguation allows information retrieval and data integration to become person-centered, not just document-centered, setting the stage for new data mining and social network tools that will facilitate the analysis of scholarly publishing and collaboration behavior. Availability: The Author-ity 2006 database is available for nonprofit academic research, and can be freely queried via http://arrowsmith.psych.uic.edu.", "title": "" }, { "docid": "984f7a2023a14efbbd5027abfc12a586", "text": "Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.", "title": "" } ]
[ { "docid": "6b3cdd024b6232e5226cae2c15463509", "text": "Blended learning involves the combination of two fields of concern: education and educational technology. To gain the scholarly recognition from educationists, it is necessary to revisit its models and educational theory underpinned. This paper respond to this issue by reviewing models related to blended learning based on two prominent educational theorists, Maslow’s and Vygotsky’s view. Four models were chosen due to their holistic ideas or vast citations related to blended learning: (1) E-Moderation Model emerging from Open University of UK; (2) Learning Ecology Model by Sun Microsoft System; (3) Blended Learning Continuum in University of Glamorgan; and (4) Inquirybased Framework by Garrison and Vaughan. The discussion of each model concerning pedagogical impact to learning and teaching are made. Critical review of the models in accordance to Maslow or Vygotsky is argued. Such review is concluded with several key principles for the design and practice in", "title": "" }, { "docid": "45e93443f9479744404d950d8649f1c9", "text": "In this paper we investigate the impact of candidate terms filtering using linguistic information on the accuracy of automatic keyphrase extraction from scientific papers. According to linguistic knowledge, the noun phrases are most likely to be keyphrases. However the definition of a noun phrase can vary from a system to another. We have identified five POS tag sequence definitions of a noun phrase in keyphrase extraction literature and proposed a new definition. We estimated experimentally the accuracy of a keyphrase extraction system using different noun phrase filters in order to determine which noun phrase definition yields to the best results. Conference Topic Text mining and information extraction", "title": "" }, { "docid": "ef1ca66424bbf52e5029d1599eb02e39", "text": "The pathogenesis of bacterial vaginosis remains largely elusive, although some microorganisms, including Gardnerella vaginalis, are suspected of playing a role in the etiology of this disorder. Recently culture-independent analysis of microbial ecosystems has proven its efficacy in characterizing the diversity of bacterial populations. Here, we report on the results obtained by combining culture and PCR-based methods to characterize the normal and disturbed vaginal microflora. A total of 150 vaginal swab samples from healthy women (115 pregnant and 35 non-pregnant) were categorized on the basis of Gram stain of direct smear as grade I (n = 112), grade II (n = 26), grade III (n = 9) or grade IV (n = 3). The composition of the vaginal microbial community of eight of these vaginal swabs (three grade I, two grade II and three grade III), all from non-pregnant women, were studied by culture and by cloning of the 16S rRNA genes obtained after direct amplification. Forty-six cultured isolates were identified by tDNA-PCR, 854 cloned 16S rRNA gene fragments were analysed of which 156 by sequencing, yielding a total of 38 species, including 9 presumptively novel species with at least five species that have not been isolated previously from vaginal samples. Interestingly, cloning revealed that Atopobium vaginae was abundant in four out of the five non-grade I specimens. Finally, species specific PCR for A. vaginae and Gardnerella vaginalis pointed to a statistically significant co-occurrence of both species in the bacterial vaginosis samples. Although historically the literature regarding bacterial vaginosis has largely focused on G. 
vaginalis in particular, several findings of this study – like the abundance of A. vaginae in disturbed vaginal microflora and the presence of several novel species – indicate that much is to be learned about the composition of the vaginal microflora and its relation to the etiology of BV.", "title": "" }, { "docid": "0f3cfd3df3022b1afca97f6517e42c58", "text": "BACKGROUND\nSegmental pigmentation disorder (SegPD) is a rare type of cutaneous dyspigmentation. This hereditary disorder, first described some 20 years ago, is characterized by hypo and hyperpigmented patches on the trunk, extremities and less likely on the face and neck. These lesions are considered as a type of checkerboard pattern.\n\n\nCASE PRESENTATION\nHerein, we present a 26-year-old male who presented with hyperpigmented patches on his trunk, neck and upper extremities. Considering the clinical and histopathological findings, the diagnosis of SegPD was confirmed.\n\n\nCONCLUSION\nSegPD is a somewhat neglected entity which should be considered in differential diagnosis of pigmentation disorders.", "title": "" }, { "docid": "3c5a5ee0b855625c959593a08d6e1e24", "text": "We present Scalable Host-tree Embeddings for Efficient Partitioning (Sheep), a distributed graph partitioning algorithm capable of handling graphs that far exceed main memory. Sheep produces high quality edge partitions an order of magnitude faster than both state of the art offline (e.g., METIS) and streaming partitioners (e.g., Fennel). Sheep’s partitions are independent of the input graph distribution, which means that graph elements can be assigned to processing nodes arbitrarily without affecting the partition quality. Sheep transforms the input graph into a strictly smaller elimination tree via a distributed map-reduce operation. By partitioning this tree, Sheep finds an upper-bounded communication volume partitioning of the original graph. We describe the Sheep algorithm and analyze its spacetime requirements, partition quality, and intuitive characteristics and limitations. We compare Sheep to contemporary partitioners and demonstrate that Sheep creates competitive partitions, scales to larger graphs, and has better runtime.", "title": "" }, { "docid": "59d996fe194fb441483a99bd562b110f", "text": "A 6th order fully differential CMOS active-RC inverse-Chebyshev low pass filter with zero capacitor spread is presented in this paper. It provides inverse-Chebyshev response with tunable cutoff frequency from 1MHz to 10MHz by regulating a resistor array. Transmission zero capacitors are replaced by resistors to eliminate the large capacitor spread also in order to reduce the area and mismatch. Modified bandwidth extended op-amp with push-pull output stage is employed to meet the power consumption and bandwidth requirements. A fast tuning circuit using successive approximation register (SAR) controller is adopted to compensate the process voltage temperatue (PVT) variations by tuning a capacitor array. Implemented in 0.13um CMOS technology, the filter draws 6.3mA from a 1.5V power supply. Simulation results show that the inband input-referred noise is 60nV/sqrt(Hz), inband 1dB gain compression point is larger than 13dBm and inband total harmonic distortion (THD) is less than −80dB with 2V differential peak-peak (2Vppd) input signal.", "title": "" }, { "docid": "d615916992e4b8a9b6f3040adace7b44", "text": "The paper presents a new design of dual-mode dielectric-loaded rectangular cavity filters. 
The response of the filter is mainly controlled by the location and orientation of the coupling apertures with no intra-cavity coupling. Each dual-mode dielectric-loaded cavity generates and controls one transmission zero which can be placed on either side of the passband. Example filters which demonstrate the soundness of the design technique are presented.", "title": "" }, { "docid": "bf7d502a818ac159cf402067b4416858", "text": "We present algorithms for evaluating and performing modeling operatyons on NURBS surfaces using the programmable fragment processor on the Graphics Processing Unit (GPU). We extend our GPU-based NURBS evaluator that evaluates NURBS surfaces to compute exact normals for either standard or rational B-spline surfaces for use in rendering and geometric modeling. We build on these calculations in our new GPU algorithms to perform standard modeling operations such as inverse evaluations, ray intersections, and surface-surface intersections on the GPU. Our modeling algorithms run in real time, enabling the user to sketch on the actual surface to create new features. In addition, the designer can edit the surface by interactively trimming it without the need for re-tessellation. We also present a GPU-accelerated algorithm to perform surface-surface intersection operations with NURBS surfaces that can output intersection curves in the model space as well as in the parametric spaces of both the intersecting surfaces at interactive rates.", "title": "" }, { "docid": "70b9aad14b2fc75dccab0dd98b3d8814", "text": "This paper describes the first phase of an ongoing program of research into theory and practice of IT governance. It conceptually explores existing IT governance literature and reveals diverse definitions of IT governance, that acknowledge its structures, control frameworks and/or processes. The definitions applied within the literature and the nature and breadth of discussion demonstrate a lack of a clear shared understanding of the term IT governance. This lack of clarity has the potential to confuse and possibly impede useful research in the field and limit valid cross-study comparisons of results. Using a content analysis approach, a number of existing diverse definitions are moulded into a \"definitive\" definition of IT governance and its usefulness is critically examined. It is hoped that this exercise will heighten awareness of the \"broad reach\" of the IT governance concept to assist researchers in the development of research projects and more effectively guide practitioners in the overall assessment of IT governance.", "title": "" }, { "docid": "dfaa6e183e70cbacc5c9de501993b7af", "text": "Traditional buildings consume more of the energy resources than necessary and generate a variety of emissions and waste. The solution to overcoming these problems will be to build them green and smart. One of the significant components in the concept of smart green buildings is using renewable energy. Solar energy and wind energy are intermittent sources of energy, so these sources have to be combined with other sources of energy or storage devices. While batteries and/or supercapacitors are an ideal choice for short-term energy storage, regenerative hydrogen-oxygen fuel cells are a promising candidate for long-term energy storage. This paper is to design and test a green building energy system that consists of renewable energy, energy storage, and energy management. 
The paper presents the architecture of the proposed green building energy system and a simulation model that allows for the study of advanced control strategies for the green building energy system. An example green building energy system is tested and simulation results show that the variety of energy source and storage devices can be managed very well.", "title": "" }, { "docid": "f2d27b79f1ac3809f7ea605203136760", "text": "The Internet of Things (IoT) is a fast-growing movement turning devices into always-connected smart devices through the use of communication technologies. This facilitates the creation of smart strategies allowing monitoring and optimization as well as many other new use cases for various sectors. Low Power Wide Area Networks (LPWANs) have enormous potential as they are suited for various IoT applications and each LPWAN technology has certain features, capabilities and limitations. One of these technologies, namely LoRa/LoRaWAN has several promising features and private and public LoRaWANs are increasing worldwide. Similarly, researchers are also starting to study the potential of LoRa and LoRaWANs. This paper examines the work that has already been done and identifies flaws and strengths by performing a comparison of created testbeds. Limitations of LoRaWANs are also identified.", "title": "" }, { "docid": "74465253e747565358243d037682f978", "text": "This paper presents analytical techniques for the determination of the expressions for the modulation signals used in the carrier-based sinusoidal and generalized discontinuous pulse-width modulation schemes for two-level, three-phase voltage source inverters. The proposed modulation schemes are applicable to inverters generating balanced or unbalanced phase voltages. Some results presented in this paper analytically generalize the several expressions for the modulation signals already reported in the literature and new ones are set forth for generating unbalanced three-phase voltages. Confirmatory experimental and simulation results are provided to illustrate the analyses.", "title": "" }, { "docid": "bbe503ddce5f16bd968e4419d74e805b", "text": "The financial industry has been strongly influenced by digitalization in the past few years reflected by the emergence of “FinTech,” which represents the marriage of “finance” and “information technology.” FinTech provides opportunities for the creation of new services and business models and poses challenges to traditional financial service providers. Therefore, FinTech has become a subject of debate among practitioners, investors, and researchers and is highly visible in the popular media. In this study, we unveil the drivers motivating the FinTech phenomenon perceived by the English and German popular press including the subjects discussed in the context of FinTech. This study is the first one to reflect the media perspective on the FinTech phenomenon in the research. In doing so, we extend the growing knowledge on FinTech and contribute to a common understanding in the financial and digital innovation literature. These study contributes to research in the areas of information systems, finance and interdisciplinary social sciences. Moreover, it brings value to practitioners (entrepreneurs, investors, regulators, etc.), who explore the field of FinTech.", "title": "" }, { "docid": "46d8cb4cb4db93ca54d4df5427a198e2", "text": "Recent advances in machine learning are paving the way for the artificial generation of high quality images and videos. 
In this paper, we investigate how generating synthetic samples through generative models can lead to information leakage, and, consequently, to privacy breaches affecting individuals who contribute their personal or sensitive data to train these models. In order to quantitatively measure privacy leakage, we train a Generative Adversarial Network (GAN), which combines a discriminative model and a generative model, to detect overfitting by relying on the discriminator capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, and show how to improve them through auxiliary knowledge of samples in the dataset. We test our attacks on several state-of-the-art models such as Deep Convolutional GAN (DCGAN), Boundary Equilibrium GAN (BEGAN), and the combination of DCGAN with a Variational Autoencoder (DCGAN+VAE), using datasets consisting of complex representations of faces (LFW) and objects (CIFAR-10). Our white-box attacks are 100% successful at inferring which samples were used to train the target model, while the best black-box attacks can infer training set membership with over 60% accuracy.", "title": "" }, { "docid": "948d808e655f8a7408f6bc46b32af6b0", "text": "Event causality knowledge is indispensable for intelligent natural language understanding. The problem is that any method for extracting event causalities from text is insufficient; it is likely that some event causalities that we can recognize in this world are not written in a corpus, no matter its size. We propose a method of hypothesizing unseen event causalities from known event causalities extracted from the web by the semantic relations between nouns. For example, our method can hypothesize deploy a security camera→avoid crimes from deploy a mosquito net→avoid malaria through semantic relation A PREVENTS B. Our experiments show that, from 2.4 million event causalities extracted from the web, our method generated more than 300,000 hypotheses, which were not in the input, with 70% precision. We also show that our method outperforms a state-of-the-art hypothesis generation method.", "title": "" }, { "docid": "ac657141ed547f870ad35d8c8b2ba8f5", "text": "Induced by “big data,” “topic modeling” has become an attractive alternative to mapping co-words in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument (“The Leiden Manifesto”) and then upscale to a sample of moderate size (n = 687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). 
In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.", "title": "" }, { "docid": "99c64c35ecf8b5e6f958e2d288d5941a", "text": "The input to an algorithm that learns a binary classifier normally consists of two sets of examples, where one set consists of positive examples of the concept to be learned, and the other set consists of negative examples. However, it is often the case that the available training data are an incomplete set of positive examples, and a set of unlabeled examples, some of which are positive and some of which are negative. The problem solved in this paper is how to learn a standard binary classifier given a nontraditional training set of this nature.\n Under the assumption that the labeled examples are selected randomly from the positive examples, we show that a classifier trained on positive and unlabeled examples predicts probabilities that differ by only a constant factor from the true conditional probabilities of being positive. We show how to use this result in two different ways to learn a classifier from a nontraditional training set. We then apply these two new methods to solve a real-world problem: identifying protein records that should be included in an incomplete specialized molecular biology database. Our experiments in this domain show that models trained using the new methods perform better than the current state-of-the-art biased SVM method for learning from positive and unlabeled examples.", "title": "" }, { "docid": "9e8cee106c21115188cd2c61c928d658", "text": "In this paper we describe distributed Scrum augmented with best practices in global software engineering (GSE) as an important paradigm for teaching critical competencies in GSE. We report on a globally distributed project course between the University of Victoria, Canada and Aalto University, Finland. The project-driven course involved 16 students in Canada and 9 students in Finland, divided into three cross-site Scrum teams working on a single large project. To assess learning of GSE competencies we employed a mixed-method approach including 13 post-course interviews, pre-, post-course and iteration questionnaires, observations, recordings of Daily Scrums as well as collection of project asynchronous communication data. Our analysis indicates that the Scrum method, along with supporting collaboration practices and tools, supports the learning of important GSE competencies, such as distributed communication and teamwork, building and maintaining trust, using appropriate collaboration tools, and inter-cultural collaboration.", "title": "" }, { "docid": "b00ec93bf47aab14aa8ced69612fc39a", "text": "In today’s increasingly rich material life, people are shifting their focus from the physical world to the spiritual world. In order to identify and care for people’s emotions, human-machine interaction systems have been created. The currently available human-machine interaction systems often support the interaction between human and robot under the line-of-sight (LOS) propagation environment, while most communications in terms of human-to-human and human-to-machine are non-LOS (NLOS). In order to break the limitation of the traditional human–machine interaction system, we propose the emotion communication system based on NLOS mode. Specifically, we first define the emotion as a kind of multimedia which is similar to voice and video. 
The information of emotion can not only be recognized, but can also be transmitted over a long distance. Then, considering the real-time requirement of the communications between the involved parties, we propose an emotion communication protocol, which provides a reliable support for the realization of emotion communications. We design a pillow robot speech emotion communication system, where the pillow robot acts as a medium for user emotion mapping. Finally, we analyze the real-time performance of the whole communication process in the scene of a long distance communication between a mother-child users’ pair, to evaluate the feasibility and effectiveness of emotion communications.", "title": "" }, { "docid": "036526b572707282a50bc218b72e5862", "text": "Linear classification is a useful tool in machine learning and data mining. For some data in a rich dimensional space, the performance (i.e., testing accuracy) of linear classifiers has shown to be close to that of nonlinear classifiers such as kernel methods, but training and testing speed is much faster. Recently, many research works have developed efficient optimization methods to construct linear classifiers and applied them to some large-scale applications. In this paper, we give a comprehensive survey on the recent development of this active research area.", "title": "" } ]
scidocsrr
956373f0f3a1246d1a9d7580ad6e8c23
Mirror or Megaphone?: How relationships between narcissism and social networking site use differ on Facebook and Twitter
[ { "docid": "705dbe0e0564b1937da71f33d17164b8", "text": "0191-8869/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.paid.2011.11.011 ⇑ Tel.: +1 309 298 1622; fax: +1 309 298 2369. E-mail address: cj-carpenter2@wiu.edu A survey (N = 292) was conducted that measured self-promoting Facebook behaviors (e.g. posting status updates and photos of oneself, updating profile information) and several anti-social behaviors (e.g. seeking social support more than one provides it, getting angry when people do not comment on one’s status updates, retaliating against negative comments). The grandiose exhibitionism subscale of the narcissistic personality inventory was hypothesized to predict the self-promoting behaviors. The entitlement/exploitativeness subscale was hypothesized to predict the anti-social behaviors. Results were largely consistent with the hypothesis for the self-promoting behaviors but mixed concerning the anti-social behaviors. Trait self-esteem was also related in the opposite manner as the Narcissism scales to some Facebook behaviors. 2011 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "117590d8d7a9c4efb9a19e4cd3e220fc", "text": "We present in this paper the language NoFun for stating component quality in the framework of the ISO/IEC quality standards. The language consists of three different parts. In the first one, software quality characteristics and attributes are defined, probably in a hiera rchical manner. As part of this definition, abstract quality models can be formulated and fu rther refined into more specialised ones. In the second part, values are assigned to component quality basic attributes. In the third one, quality requirements can be stated over components, both context-free (universal quality properties) and context-dependent (quality properties for a given framework -software domain, company, project, etc.). Last, we address to the translation of the language to UML, using its extension mechanisms for capturing the fundamental non-functional concepts.", "title": "" }, { "docid": "55d7db89621dc57befa330c6dea823bf", "text": "In this paper we propose CUDA-based implementations of two 3D point sets registration algorithms: Soft assign and EM-ICP. Both algorithms are known for being time demanding, even on modern multi-core CPUs. Our GPUbased implementations vastly outperform CPU ones. For instance, our CUDA EM-ICP aligns 5000 points in less than 7 seconds on a GeForce 8800GT, while the same implementation in OpenMP on an Intel Core 2 Quad would take 7 minutes.", "title": "" }, { "docid": "ea6cb11966919ff9ef331766974aa4c7", "text": "Verifiable secret sharing is an important primitive in distributed cryptography. With the growing interest in the deployment of threshold cryptosystems in practice, the traditional assumption of a synchronous network has to be reconsidered and generalized to an asynchronous model. This paper proposes the first practical verifiable secret sharing protocol for asynchronous networks. The protocol creates a discrete logarithm-based sharing and uses only a quadratic number of messages in the number of participating servers. It yields the first asynchronous Byzantine agreement protocol in the standard model whose efficiency makes it suitable for use in practice. Proactive cryptosystems are another important application of verifiable secret sharing. The second part of this paper introduces proactive cryptosystems in asynchronous networks and presents an efficient protocol for refreshing the shares of a secret key for discrete logarithm-based sharings.", "title": "" }, { "docid": "9611686ff4eedf047460becec43ce59d", "text": "We propose a novel location-based second-factor authentication solution for modern smartphones. We demonstrate our solution in the context of point of sale transactions and show how it can be effectively used for the detection of fraudulent transactions caused by card theft or counterfeiting. Our scheme makes use of Trusted Execution Environments (TEEs), such as ARM TrustZone, commonly available on modern smartphones, and resists strong attackers, even those capable of compromising the victim phone applications and OS. It does not require any changes in the user behavior at the point of sale or to the deployed terminals. In particular, we show that practical deployment of smartphone-based second-factor authentication requires a secure enrollment phase that binds the user to his smartphone TEE and allows convenient device migration. We then propose two novel enrollment schemes that resist targeted attacks and provide easy migration. 
We implement our solution within available platforms and show that it is indeed realizable, can be deployed with small software changes, and does not hinder user experience.", "title": "" }, { "docid": "da3e12690fd5bfeb21be374e7aa3a111", "text": "The most common specimens from immunocompromised patients that are analyzed for detection of herpes simplex virus (HSV) or varicella-zoster virus (VZV) are from skin lesions. Many types of assays are applicable to these samples, but some, such as virus isolation and direct fluorescent antibody testing, are useful only in the early phases of the lesions. In contrast, nucleic acid (NA) detection methods, which generally have superior sensitivity and specificity, can be applied to skin lesions at any stage of progression. NA methods are also the best choice, and sometimes the only choice, for detecting HSV or VZV in blood, cerebrospinal fluid, aqueous or vitreous humor, and from mucosal surfaces. NA methods provide the best performance when reliability and speed (within 24 hours) are considered together. They readily distinguish the type of HSV detected or the source of VZV detected (wild type or vaccine strain). Nucleic acid detection methods are constantly being improved with respect to speed and ease of performance. Broader applications are under study, such as the use of quantitative results of viral load for prognosis and to assess the efficacy of antiviral therapy.", "title": "" }, { "docid": "450f13659ece54bee1b4fe61cc335eb2", "text": "Though considerable effort has recently been devoted to hardware realization of one-dimensional chaotic systems, the influence of implementation inaccuracies is often underestimated and limited to non-idealities in the non-linear map. Here we investigate the consequences of sample-and-hold errors. Two degrees of freedom in the design space are considered: the choice of the map and the sample-and-hold architecture. Current-mode systems based on Bernoulli Shift, on Tent Map and on Tailed Tent Map are taken into account and coupled with an order-one model of sample-and-hold to ascertain error causes and suggest implementation improvements. key words: chaotic systems, analog circuits, sample-and-hold errors", "title": "" }, { "docid": "fe70c7614c0414347ff3c8bce7da47e7", "text": "We explore a model of stress prediction in Russian using a combination of local contextual features and linguisticallymotivated features associated with the word’s stem and suffix. We frame this as a ranking problem, where the objective is to rank the pronunciation with the correct stress above those with incorrect stress. We train our models using a simple Maximum Entropy ranking framework allowing for efficient prediction. An empirical evaluation shows that a model combining the local contextual features and the linguistically-motivated non-local features performs best in identifying both primary and secondary stress.", "title": "" }, { "docid": "598dbf48c54bcea6e74d85a8393dada1", "text": "With the fast development of social media, the information overload problem becomes increasingly severe and recommender systems play an important role in helping online users find relevant information by suggesting information of potential interests. Social activities for online users produce abundant social relations. Social relations provide an independent source for recommendation, presenting both opportunities and challenges for traditional recommender systems. 
Users are likely to seek suggestions from both their local friends and users with high global reputations, motivating us to exploit social relations from local and global perspectives for online recommender systems in this paper. We develop approaches to capture local and global social relations, and propose a novel framework LOCABAL taking advantage of both local and global social context for recommendation. Empirical results on real-world datasets demonstrate the effectiveness of our proposed framework and further experiments are conducted to understand how local and global social context work for the proposed framework.", "title": "" }, { "docid": "83bec63fb2932aec5840a9323cc290b4", "text": "This paper extends fully-convolutional neural networks (FCN) for the clothing parsing problem. Clothing parsing requires higher-level knowledge on clothing semantics and contextual cues to disambiguate fine-grained categories. We extend FCN architecture with a side-branch network which we refer outfit encoder to predict a consistent set of clothing labels to encourage combinatorial preference, and with conditional random field (CRF) to explicitly consider coherent label assignment to the given image. The empirical results using Fashionista and CFPD datasets show that our model achieves state-of-the-art performance in clothing parsing, without additional supervision during training. We also study the qualitative influence of annotation on the current clothing parsing benchmarks, with our Web-based tool for multi-scale pixel-wise annotation and manual refinement effort to the Fashionista dataset. Finally, we show that the image representation of the outfit encoder is useful for dress-up image retrieval application.", "title": "" }, { "docid": "e84ea9948c573b1f92f11d7c5f5f5ab9", "text": "The formulation of microplane model M4 in Parts I and II is extended to rate dependence. Two types of rate effect in the nonlinear triaxial behavior of concrete are distinguished: (1) Rate dependence of fracturing (microcrack growth) associated with the activation energy of bond ruptures, and (2) creep (or viscoelasticity). Short-time linear creep (viscoelasticity) is approximated by a nonaging Maxwell spring-dashpot model calibrated so that its response at constant stress would be tangent to the compliance function of model B3 for a time delay characteristic of the problem at hand. An effective explicit algorithm for step-by-step finiteelement analysis is formulated. The main reason that the rate dependence of fracturing must be taken into account is to simulate the sudden reversal of postpeak strain softening into hardening revealed by recent tests. The main reason that short-time creep (viscoelasticity) must be taken into account is to simulate the rate dependence of the initial and unloading stiffness. Good approximations of the rate effects observed in material testing are achieved. The model is suitable for finite-element analysis of impact, blast, earthquake, and shorttime loads up to several hours duration.", "title": "" }, { "docid": "8cd87f65a99a3523df41b05db9395a25", "text": "Accurate real time crime prediction is a fundamental issue for public safety, but remains a challenging problem for the scientific community. Crime occurrences depend on many complex factors. Compared to many predictable events, crime is sparse. At different spatiotemporal scales, crime distributions display dramatically different patterns. These distributions are of very low regularity in both space and time. 
In this work, we adapt the state-of-the-art deep learning spatio-temporal predictor, ST-ResNet [Zhang et al, AAAI, 2017], to collectively predict crime distribution over the Los Angeles area. Our models are two staged. First, we preprocess the raw crime data. This includes regularization in both space and time to enhance predictable signals. Second, we adapt hierarchical structures of residual convolutional units to train multifactor crime prediction models. Experiments over a half year period in Los Angeles reveal highly accurate predictive power of our models.", "title": "" }, { "docid": "6c2a74a6709b5f7355da3afec15cc751", "text": "\"This chapter critically examines the hypothesis that women's rising employment levels have increased their economic independence and hence have greatly reduced the desirability of marriage. Little firm empirical support for this hypothesis is found. The apparent congruence in time-series data of women's rising employment with declining marriage rates and increasing marital instability is partly a result of using the historically atypical early postwar behavior of the baby boom era as the benchmark for comparisons and partly due to confounding trends in delayed marriage with those of nonmarriage.\"", "title": "" }, { "docid": "c3b9b03009f78954f9efff71cdfeed23", "text": "In many software attacks, inducing an illegal control-flow transfer in the target system is one common step. Control-Flow Integrity (CFI) protects a software system by enforcing a pre-determined control-flow graph. In addition to providing strong security, CFI enables static analysis on low-level code. This paper evaluates whether CFI-enabled static analysis can help build efficient and validated data sandboxing. Previous systems generally sandbox memory writes for integrity, but avoid protecting confidentiality due to the high overhead of sandboxing memory reads. To reduce overhead, we have implemented a series of optimizations that remove sandboxing instructions if they are proven unnecessary by static analysis. On top of CFI, our system adds only 2.7% runtime overhead on SPECint2000 for sandboxing memory writes and adds modest 19% for sandboxing both reads and writes. We have also built a principled data-sandboxing verifier based on range analysis. The verifier checks the safety of the results of the optimizer, which removes the need to trust the rewriter and optimizer. Our results show that the combination of CFI and static analysis has the potential of bringing down the cost of general inlined reference monitors, while maintaining strong security.", "title": "" }, { "docid": "855b35e6e4c6f147de71bf0864184d56", "text": "Leveraging large data sets, deep Convolutional Neural Networks (CNNs) achieve state-of-the-art recognition accuracy. Due to the substantial compute and memory operations, however, they require significant execution time. The massive parallel computing capability of GPUs make them as one of the ideal platforms to accelerate CNNs and a number of GPU-based CNN libraries have been developed. While existing works mainly focus on the computational efficiency of CNNs, the memory efficiency of CNNs have been largely overlooked. Yet CNNs have intricate data structures and their memory behavior can have significant impact on the performance. In this work, we study the memory efficiency of various CNN layers and reveal the performance implication from both data layouts and memory access patterns. 
Experiments show the universal effect of our proposed optimizations on both single layers and various networks, with up to 27.9× for a single layer and up to 5.6× on the whole networks.", "title": "" }, { "docid": "063598613ce313e2ad6d2b0697e0c708", "text": "Contour shape descriptors are among the important shape description methods. Fourier descriptors (FD) and curvature scale space descriptors (CSSD) are widely used as contour shape descriptors for image retrieval in the literature. In MPEG-7, CSSD has been proposed as one of the contour-based shape descriptors. However, no comprehensive comparison has been made between these two shape descriptors. In this paper we study and compare FD and CSSD using standard principles and standard database. The study targets image retrieval application. Our experimental results show that FD outperforms CSSD in terms of robustness, low computation, hierarchical representation, retrieval performance and suitability for efficient indexing.", "title": "" }, { "docid": "5d1fbf1b9f0529652af8d28383ce9a34", "text": "Automatic License Plate Recognition (ALPR) is one of the most prominent tools in intelligent transportation system applications. In ALPR algorithm implementation, License Plate Detection (LPD) is a critical stage. Despite many state-of-the-art researches, some parameters such as low/high illumination, type of camera, or a different style of License Plate (LP) causes LPD step is still a challenging problem. In this paper, we propose a new style-free method based on the cross power spectrum. Our method has three steps; designing adaptive binarized filter, filtering using cross power spectrum and verification. Experimental results show that the recognition accuracy of the proposed approach is 98% among 2241 Iranian cars images including two styles of the LP. In addition, the process of the plate detection takes 44 milliseconds, which is suitable for real-time processing.", "title": "" }, { "docid": "09da7573fbad0b501eb1e834c413a4aa", "text": "We present XGSN, an open-source system that relies on semantic representations of sensor metadata and observations, to guide the process of annotating and publishing sensor data on the Web. XGSN is able to handle the data acquisition process of a wide number of devices and protocols, and is designed as a highly extensible platform, leveraging on the existing capabilities of the Global Sensor Networks (GSN) middleware. Going beyond traditional sensor management systems, XGSN is capable of enriching virtual sensor descriptions with semantically annotated content using standard vocabularies. In the proposed approach, sensor data and observations are annotated using an ontology network based on the SSN ontology, providing a standardized queryable representation that makes it easier to share, discover, integrate and interpret the data. XGSN manages the annotation process for the incoming sensor observations, producing RDF streams that are sent to the cloud-enabled Linked Sensor Middleware, which can internally store the data or perform continuous query processing. The distributed nature of XGSN allows deploying different remote instances that can interchange observation data, so that virtual sensors can be aggregated and consume data from other remote virtual sensors. 
In this paper we show how this approach has been implemented in XGSN, and incorporated to the wider OpenIoT platform, providing a highly flexible and scalable system for managing the life-cycle of sensor data, from acquisition to publishing, in the context of the semantic Web of Things.", "title": "" }, { "docid": "36a1e7716d6cdac89911ca0b52c019ff", "text": "Some recent sequence-to-sequence models like the Transformer (Vaswani et al., 2017) can score all output positions in parallel. We propose a simple algorithmic technique that exploits this property to generate multiple tokens in parallel at decoding time with little to no loss in quality. Our fastest models exhibit wall-clock speedups of up to 4x over standard greedy decoding on the tasks of machine translation and image super-resolution.", "title": "" }, { "docid": "9921540f6fc1ac3a79fc7f203d749509", "text": "We propose a fast algorithm for ridge regression when the number of features is much larger than the number of observations (p≫n). The standard way to solve ridge regression in this setting works in the dual space and gives a running time of O(n²p). Our algorithm Subsampled Randomized Hadamard Transform Dual Ridge Regression (SRHT-DRR) runs in time O(np log(n)) and works by preconditioning the design matrix by a Randomized Walsh-Hadamard Transform with a subsequent subsampling of features. We provide risk bounds for our SRHT-DRR algorithm in the fixed design setting and show experimental results on synthetic and real datasets.", "title": "" }, { "docid": "88e582927c4e4018cb4071eeeb6feff4", "text": "While previous studies have correlated the Dark Triad traits (i.e., narcissism, psychopathy, and Machiavellianism) with a preference for short-term relationships, little research has addressed possible correlations with short-term relationship sub-types. In this online study using Amazon’s Mechanical Turk system (N = 210) we investigated the manner in which scores on the Dark Triad relate to the selection of different mating environments using a budget-allocation task. Overall, the Dark Triad were positively correlated with preferences for short-term relationships and negatively correlated with preferences for a long-term relationship. Specifically, narcissism was uniquely correlated with preferences for one-night stands and friends-with-benefits and psychopathy was uniquely correlated with preferences for bootycall relationships. Both narcissism and psychopathy were negatively correlated with preferences for serious romantic relationships. In mediation analyses, psychopathy partially mediated the sex difference in preferences for booty-call relationships and narcissism partially mediated the sex difference in preferences for one-night stands. In addition, the sex difference in preference for serious romantic relationships was partially mediated by both narcissism and psychopathy. It appears the Dark Triad traits facilitate the adoption of specific mating environments providing fit with people’s personality traits. 2012 Elsevier Ltd. 
All rights reserved.", "title": "" } ]
scidocsrr
9f554efb433beaac2877bad31b001999
A Complexity-Invariant Distance Measure for Time Series
[ { "docid": "8ab791e9db930fd27f6459e72a1687e5", "text": "The problem of indexing time series has attracted much interest. Most algorithms used to index time series utilize the Euclidean distance or some variation thereof. However, it has been forcefully shown that the Euclidean distance is a very brittle distance measure. Dynamic time warping (DTW) is a much more robust distance measure for time series, allowing similar shapes to match even if they are out of phase in the time axis. Because of this flexibility, DTW is widely used in science, medicine, industry and finance. Unfortunately, however, DTW does not obey the triangular inequality and thus has resisted attempts at exact indexing. Instead, many researchers have introduced approximate indexing techniques or abandoned the idea of indexing and concentrated on speeding up sequential searches. In this work, we introduce a novel technique for the exact indexing of DTW. We prove that our method guarantees no false dismissals and we demonstrate its vast superiority over all competing approaches in the largest and most comprehensive set of time series indexing experiments ever undertaken.", "title": "" } ]
[ { "docid": "b04ba2e942121b7a32451f0b0f690553", "text": "Due to the growing number of vehicles on the roads worldwide, road traffic accidents are currently recognized as a major public safety problem. In this context, connected vehicles are considered as the key enabling technology to improve road safety and to foster the emergence of next generation cooperative intelligent transport systems (ITS). Through the use of wireless communication technologies, the deployment of ITS will enable vehicles to autonomously communicate with other nearby vehicles and roadside infrastructures and will open the door for a wide range of novel road safety and driver assistive applications. However, connecting wireless-enabled vehicles to external entities can make ITS applications vulnerable to various security threats, thus impacting the safety of drivers. This article reviews the current research challenges and opportunities related to the development of secure and safe ITS applications. It first explores the architecture and main characteristics of ITS systems and surveys the key enabling standards and projects. Then, various ITS security threats are analyzed and classified, along with their corresponding cryptographic countermeasures. Finally, a detailed ITS safety application case study is analyzed and evaluated in light of the European ETSI TC ITS standard. An experimental test-bed is presented, and several elliptic curve digital signature algorithms (ECDSA) are benchmarked for signing and verifying ITS safety messages. To conclude, lessons learned, open research challenges and opportunities are discussed. Electronics 2015, 4 381", "title": "" }, { "docid": "0ccde44cffc4d888668b14370e147529", "text": "Bitcoin is a crypto currency with several advantages over previous approaches. Transactions are con®rmed and stored by a peer-to-peer network in a blockchain. Therefore, all transactions are public and soon solutions where designed to increase privacy in Bitcoin Many come with downsides, like requiring a trusted third-party or requiring modi®cations to Bitcoin. In this paper, we compare these approaches according to several criteria. Based on our ®ndings, CoinJoin emerges as the best approach for anonymizing Bitcoins today.", "title": "" }, { "docid": "66313e7ec725fa6081a9d834ce87cb2e", "text": "In this paper, the DCXO is based on a Pierce oscillator with two MIM capacitor arrays for tuning the anti-resonant frequency of a 19.2MHz crystal. Each array of MIM capacitors is thermometer-coded and formatted in a matrix shape to facilitate layout. Although a segmented architecture is an area-efficient method for implementing a SC array, a thermometer-coded array provides the best linearity and guarantees a monotonic frequency tuning characteristic, which is of utmost importance in an AFC system.", "title": "" }, { "docid": "db1d5903d2d49d995f5d3b6dd0681323", "text": "Diffusion tensor imaging (DTI) is an exciting new MRI modality that can reveal detailed anatomy of the white matter. DTI also allows us to approximate the 3D trajectories of major white matter bundles. By combining the identified tract coordinates with various types of MR parameter maps, such as T2 and diffusion properties, we can perform tract-specific analysis of these parameters. Unfortunately, 3D tract reconstruction is marred by noise, partial volume effects, and complicated axonal structures. Furthermore, changes in diffusion anisotropy under pathological conditions could alter the results of 3D tract reconstruction. 
In this study, we created a white matter parcellation atlas based on probabilistic maps of 11 major white matter tracts derived from the DTI data from 28 normal subjects. Using these probabilistic maps, automated tract-specific quantification of fractional anisotropy and mean diffusivity were performed. Excellent correlation was found between the automated and the individual tractography-based results. This tool allows efficient initial screening of the status of multiple white matter tracts.", "title": "" }, { "docid": "6d6403713d20db8bc04861bb62c1c216", "text": "• In this lesson, we will learn how to measure the coefficient of correlation for two sets of ranking. • The coefficient of correlation, r, measures the strength of association or correlation between two sets of data that can be measured. Sometimes, the data is not measurable but can only be ordered, as in ranking. • For example, two students can be asked to rank toast, cereals, and dim sum in terms of preference. They are asked to assign rank 1 to their favourite and rank 3 to the choice of breakfast that they like least. • We will use Spearman's Rank Order Correlation Coefficient to calculate the strength of association between the rankings produced by these two students.", "title": "" }, { "docid": "696fd5b7e7bff90432f8c219230ebc7c", "text": "This paper proposes a simple, cost-effective, and efficient brushless dc (BLDC) motor drive for solar photovoltaic (SPV) array-fed water pumping system. A zeta converter is utilized to extract the maximum available power from the SPV array. The proposed control algorithm eliminates phase current sensors and adapts a fundamental frequency switching of the voltage source inverter (VSI), thus avoiding the power losses due to high frequency switching. No additional control or circuitry is used for speed control of the BLDC motor. The speed is controlled through a variable dc link voltage of VSI. An appropriate control of zeta converter through the incremental conductance maximum power point tracking (INC-MPPT) algorithm offers soft starting of the BLDC motor. The proposed water pumping system is designed and modeled such that the performance is not affected under dynamic conditions. The suitability of proposed system at practical operating conditions is demonstrated through simulation results using MATLAB/Simulink followed by an experimental validation.", "title": "" }, { "docid": "c4e521c801a96c9e79c25006f44d7147", "text": "Cal Poly students are participating in the development of a new class of picosatellite, the CubeSat. CubeSats are ideal as space development projects for universities around the world. In addition to their significant role in educating space scientists and engineers, CubeSats provide a low-cost platform for testing and space qualification of the next generation of small payloads in space. A key component of the project is the development of a standard CubeSat deployer. This deployer is capable of releasing a number of CubeSats as secondary payloads on a wide range of launchers. The standard deployer requires all CubeSats to conform to common physical requirements, and share a standard deployer interface. 
CubeSat development time and cost can be significantly reduced by the development of standards that are shared by a large number of spacecraft.", "title": "" }, { "docid": "4ab7f780e9ae6faacd5d4cfef9ef59f4", "text": "We describe a rapid, sensitive process for comprehensively identifying proteins in macromolecular complexes that uses multidimensional liquid chromatography (LC) and tandem mass spectrometry (MS/MS) to separate and fragment peptides. The SEQUEST algorithm, relying upon translated genomic sequences, infers amino acid sequences from the fragment ions. The method was applied to the Saccharomyces cerevisiae ribosome leading to the identification of a novel protein component of the yeast and human 40S subunit. By offering the ability to identify >100 proteins in a single run, this process enables components in even the largest macromolecular complexes to be analyzed comprehensively.", "title": "" }, { "docid": "486bd67781bb1067aa4ff6009cdeecb7", "text": "BACKGROUND\nThere was less than satisfactory progress, especially in sub-Saharan Africa, towards child and maternal mortality targets of Millennium Development Goals (MDGs) 4 and 5. The main aim of this study was to describe the prevalence and determinants of essential new newborn care practices in the Lawra District of Ghana.\n\n\nMETHODS\nA cross-sectional study was carried out in June 2014 on a sample of 422 lactating mothers and their children aged between 1 and 12 months. A systematic random sampling technique was used to select the study participants who attended post-natal clinic in the Lawra district hospital.\n\n\nRESULTS\nOf the 418 newborns, only 36.8% (154) was judged to have had safe cord care, 34.9% (146) optimal thermal care, and 73.7% (308) were considered to have had adequate neonatal feeding. The overall prevalence of adequate new born care comprising good cord care, optimal thermal care and good neonatal feeding practices was only 15.8%. Mothers who attained at least Senior High Secondary School were 20.5 times more likely to provide optimal thermal care [AOR 22.54; 95% CI (2.60-162.12)], compared to women had no formal education at all. Women who received adequate ANC services were 4.0 times (AOR  =  4.04 [CI: 1.53, 10.66]) and 1.9 times (AOR  =  1.90 [CI: 1.01, 3.61]) more likely to provide safe cord care and good neonatal feeding as compared to their counterparts who did not get adequate ANC. However, adequate ANC services was unrelated to optimal thermal care. Compared to women who delivered at home, women who delivered their index baby in a health facility were 5.6 times more likely of having safe cord care for their babies (AOR = 5.60, Cl: 1.19-23.30), p = 0.03.\n\n\nCONCLUSIONS\nThe coverage of essential newborn care practices was generally low. Essential newborn care practices were positively associated with high maternal educational attainment, adequate utilization of antenatal care services and high maternal knowledge of newborn danger signs. Therefore, greater improvement in essential newborn care practices could be attained through proven low-cost interventions such as effective ANC services, health and nutrition education that should span from community to health facility levels.", "title": "" }, { "docid": "d8ef81ed5cd25c98abb8d94dd769f9aa", "text": "Organic anion transporting polypeptides (OATPs) are a group of membrane transport proteins that facilitate the influx of endogenous and exogenous substances across biological membranes. 
OATPs are found in enterocytes and hepatocytes and in brain, kidney, and other tissues. In enterocytes, OATPs facilitate the gastrointestinal absorption of certain orally administered drugs. Fruit juices such as grapefruit juice, orange juice, and apple juice contain substances that are OATP inhibitors. These fruit juices diminish the gastrointestinal absorption of certain antiallergen, antibiotic, antihypertensive, and β-blocker drugs. While there is no evidence, so far, that OATP inhibition affects the absorption of psychotropic medications, there is no room for complacency because the field is still nascent and because the necessary studies have not been conducted. Patients should therefore err on the side of caution, taking their medications at least 4 hours distant from fruit juice intake. Doing so is especially desirable with grapefruit juice, orange juice, and apple juice; with commercial fruit juices in which OATP-inhibiting substances are likely to be present in higher concentrations; with calcium-fortified fruit juices; and with medications such as atenolol and fexofenadine, the absorption of which is substantially diminished by concurrent fruit juice intake.", "title": "" }, { "docid": "3d681bd21240f17486ea9f0219b1e85b", "text": "Vision-aided Inertial Navigation Systems (V-INS) can provide precise state estimates for the 3D motion of a vehicle when no external references (e.g., GPS) are available. This is achieved by combining inertial measurements from an IMU with visual observations from a camera under the assumption that the rigid transformation between the two sensors is known. Errors in the IMU-camera calibration process causes biases that reduce the accuracy of the estimation process and can even lead to divergence. In this paper, we present a Kalman filter-based algorithm for precisely determining the unknown transformation between a camera and an IMU. Contrary to previous approaches, we explicitly account for the time correlations of the IMU measurements and provide a figure of merit (covariance) for the estimated transformation. The proposed method does not require any special hardware (such as spin table or 3D laser scanner) except a calibration target. Simulation and experimental results are presented that validate the proposed method and quantify its accuracy.", "title": "" }, { "docid": "7eec1e737523dc3b78de135fc71b058f", "text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences epsivnerally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. 
We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches", "title": "" }, { "docid": "2e5d9f5ae2f631357d8a6ef22cd52b62", "text": "Necrobiosis lipoidica is a rare disorder that usually appears in the lower extremities and it is often related to diabetes mellitus. There are few reported cases of necrobiosis lipoidica in children. We present an interesting case in that the patient developed lesions on the abdomen, which is an unusual location.", "title": "" }, { "docid": "733a7a024f5e408323f9b037828061bb", "text": "Hidden Markov model (HMM) is one of the popular techniques for story segmentation, where hidden Markov states represent the topics, and the emission distributions of n-gram language model (LM) are dependent on the states. Given a text document, a Viterbi decoder finds the hidden story sequence, with a change of topic indicating a story boundary. In this paper, we propose a discriminative approach to story boundary detection. In the HMM framework, we use deep neural network (DNN) to estimate the posterior probability of topics given the bag-ofwords in the local context. We call it the DNN-HMM approach. We consider the topic dependent LM as a generative modeling technique, and the DNN-HMM as the discriminative solution. Experiments on topic detection and tracking (TDT2) task show that DNN-HMM outperforms traditional n-gram LM approach significantly and achieves state-of-the-art performance.", "title": "" }, { "docid": "4fc64e24e9b080ffcc45cae168c2e339", "text": "During real time control of a dynamic system, one needs to design control systems with advanced control strategies to handle inherent nonlinearities and disturbances. This paper deals with the designing of a model reference adaptive control system with the use of MIT rule for real time control of a ball and beam system. This paper uses the gradient theory to develop MIT rule in which one or more parameters of adaptive controller needs to be adjusted so that the plant could track the reference model. A linearized model of ball and beam system is used in this paper to design the controller on MATLAB and the designed controller is then applied for real time control of ball and beam system. Simulations carried on SIMULINK and MATLAB show good performance of the designed adaptive controller in real time.", "title": "" }, { "docid": "2f565b3b6c5f14a12f93ada87f925791", "text": "Cyber attacks prediction is an important part of risk management. Existing cyber attacks prediction methods did not fully consider the specific environment factors of the target network, which may make the results deviate from the true situation. In this paper, we propose a cyber attacks prediction model based on Bayesian network. We use attack graphs to represent all the vulnerabilities and possible attack paths. Then we capture the using environment factors using Bayesian network model. Cyber attacks predictions are performed on the constructed Bayesian network. Experimental analysis shows that our method gets more accurate results.", "title": "" }, { "docid": "d0e584d00c82df795e1d79bd4837ceb9", "text": "Many observations suggest that typical (emotional or orthostatic) vasovagal syncope (VVS) is not a disease, but rather a manifestation of a non-pathological trait. Some authors have hypothesized this type of syncope as a “defense mechanism” for the organism and a few theories have been postulated. 
Under the human violent conflicts theory, the VVS evolved during the Paleolithic era only in the human lineage. In this evolutionary period, a predominant cause of death was wounding by a sharp object. This theory could explain the occurrence of emotional VVS, but not of the orthostatic one. The clot production theory suggests that the vasovagal reflex is a defense mechanism against hemorrhage in mammals. This theory could explain orthostatic VVS, but not emotional VVS. The brain self-preservation theory is mainly based on the observation that during tilt testing a decrease in cerebral blood flow often precedes the drop in blood pressure and heart rate. The faint causes the body to take on a gravitationally neutral position, and thereby provides a better chance of restoring brain blood supply. However, a decrease in cerebral blood flow has not been demonstrated during negative emotions, which trigger emotional VVS. Under the heart defense theory, the vasovagal reflex seems to be a protective mechanism against sympathetic overactivity and the heart is the most vulnerable organ during this condition. This appears to be the only unifying theory able to explain the occurrence of the vasovagal reflex and its associated selective advantage, during both orthostatic and emotional stress.", "title": "" }, { "docid": "b411fdd1a8f1289c440b2c67c63143f0", "text": "In this paper, we propose a novel learning framework for the problem of domain transfer learning. We map the data of two domains to one single common space, and learn a classifier in this common space. Then we adapt the common classifier to the two domains by adding two adaptive functions to it respectively. In the common space, the target domain data points are weighted and matched to the target domain in term of distributions. The weighting terms of source domain data points and the target domain classification responses are also regularized by the local reconstruction coefficients. The novel transfer learning framework is evaluated over some benchmark cross-domain data sets, and it outperforms the existing state-of-the-art transfer learning methods.", "title": "" } ]
scidocsrr
aee4e5a21e6e6f8d6ac260c00169e63e
Beauty in a smile: the role of medial orbitofrontal cortex in facial attractiveness
[ { "docid": "e3bb16dfbe54599c83743e5d7f1facc6", "text": "Testosterone-dependent secondary sexual characteristics in males may signal immunological competence and are sexually selected for in several species,. In humans, oestrogen-dependent characteristics of the female body correlate with health and reproductive fitness and are found attractive. Enhancing the sexual dimorphism of human faces should raise attractiveness by enhancing sex-hormone-related cues to youth and fertility in females,, and to dominance and immunocompetence in males,,. Here we report the results of asking subjects to choose the most attractive faces from continua that enhanced or diminished differences between the average shape of female and male faces. As predicted, subjects preferred feminized to average shapes of a female face. This preference applied across UK and Japanese populations but was stronger for within-population judgements, which indicates that attractiveness cues are learned. Subjects preferred feminized to average or masculinized shapes of a male face. Enhancing masculine facial characteristics increased both perceived dominance and negative attributions (for example, coldness or dishonesty) relevant to relationships and paternal investment. These results indicate a selection pressure that limits sexual dimorphism and encourages neoteny in humans.", "title": "" } ]
[ { "docid": "be18a6729dc170fc03b61436c99c843d", "text": "Hepatitis C virus (HCV) is a major cause of liver disease worldwide and a potential cause of substantial morbidity and mortality in the future. The complexity and uncertainty related to the geographic distribution of HCV infection and chronic hepatitis C, determination of its associated risk factors, and evaluation of cofactors that accelerate its progression, underscore the difficulties in global prevention and control of HCV. Because there is no vaccine and no post-exposure prophylaxis for HCV, the focus of primary prevention efforts should be safer blood supply in the developing world, safe injection practices in health care and other settings, and decreasing the number of people who initiate injection drug use.", "title": "" }, { "docid": "11cf39e49d5365f78ff849c8800fe724", "text": "Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression to localize objects. While the probabilities for class labels naturally reflect classification confidence, localization confidence is absent. This makes properly localized bounding boxes degenerate during iterative regression or even suppressed during NMS. In the paper we propose IoU-Net learning to predict the IoU between each detected bounding box and the matched ground-truth. The network acquires this confidence of localization, which improves the NMS procedure by preserving accurately localized bounding boxes. Furthermore, an optimization-based bounding box refinement method is proposed, where the predicted IoU is formulated as the objective. Extensive experiments on the MS-COCO dataset show the effectiveness of IoU-Net, as well as its compatibility with and adaptivity to several state-of-the-art object detectors.", "title": "" }, { "docid": "c55cf6c871a681cad112cb9c664a1928", "text": "Splitting of the behavioural activity phase has been found in nocturnal rodents with suprachiasmatic nucleus (SCN) coupling disorder. A similar phenomenon was observed in the sleep phase in the diurnal human discussed here, suggesting that there are so-called evening and morning oscillators in the SCN of humans. The present case suffered from bipolar disorder refractory to various treatments, and various circadian rhythm sleep disorders, such as delayed sleep phase, polyphasic sleep, separation of the sleep bout resembling splitting and circabidian rhythm (48 h), were found during prolonged depressive episodes with hypersomnia. Separation of sleep into evening and morning components and delayed sleep-offset (24.69-h cycle) developed when lowering and stopping the dose of aripiprazole (APZ). However, resumption of APZ improved these symptoms in 2 weeks, accompanied by improvement in the patient's depressive state. Administration of APZ may improve various circadian rhythm sleep disorders, as well as improve and prevent manic-depressive episodes, via augmentation of coupling in the SCN network.", "title": "" }, { "docid": "f829820706687c186e998bfed5be9c42", "text": "As deep learning systems are widely adopted in safetyand securitycritical applications, such as autonomous vehicles, banking systems, etc., malicious faults and attacks become a tremendous concern, which potentially could lead to catastrophic consequences. In this paper, we initiate the first study of leveraging physical fault injection attacks on Deep Neural Networks (DNNs), by using laser injection technique on embedded systems. 
In particular, our exploratory study targets four widely used activation functions in DNNs development, that are the general main building block of DNNs that creates non-linear behaviors – ReLu, softmax, sigmoid, and tanh. Our results show that by targeting these functions, it is possible to achieve a misclassification by injecting faults into the hidden layer of the network. Such result can have practical implications for realworld applications, where faults can be introduced by simpler means (such as altering the supply voltage).", "title": "" }, { "docid": "56b42c551ad57c82ad15e6fc2e98f528", "text": "Recent work has demonstrated that when artificial agents are limited in their ability to achieve their goals, the agent designer can benefit by making the agent’s goals different from the designer’s. This gives rise to the optimization problem of designing the artificial agent’s goals—in the RL framework, designing the agent’s reward function. Existing attempts at solving this optimal reward problem do not leverage experience gained online during the agent’s lifetime nor do they take advantage of knowledge about the agent’s structure. In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent’s lifetime. We show that our method generalizes a standard policy gradient approach, and we demonstrate its ability to improve reward functions in agents with various forms of limitations. 1 The Optimal Reward Problem In this work, we consider the scenario of an agent designer building an autonomous agent. The designer has his or her own goals which must be translated into goals for the autonomous agent. We represent goals using the Reinforcement Learning (RL) formalism of the reward function. This leads to the optimal reward problem of designing the agent’s reward function so as to maximize the objective reward received by the agent designer. Typically, the designer assigns his or her own reward to the agent. However, there is ample work which demonstrates the benefit of assigning reward which does not match the designer’s. For example, work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering the optimal policy, and PAC-MDP methods [5, 20] including approximate Bayesian methods [7, 19] add bonuses to the objective reward to achieve optimism under uncertainty. These approaches explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that which would occur using the objective reward function. These methods do not explicitly consider the optimal reward problem; however, they do show improved performance through reward modification. In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an explicit hypothesis about the benefit of reward design—that it helps mitigate the performance loss caused by computational constraints (bounds) on agent architectures. We considered various types of agent limitations—limits on planning depth, failure to account for partial observability, and other erroneous modeling assumptions—and demonstrated the benefits of good reward functions in each case empirically. Crucially, in bounded agents, the optimal reward function often leads to behavior that is different from the asymptotic behavior achieved with the objective reward function. 
In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving reward functions for a family of bounded agents that behave according to repeated local (from the current state) model-based planning. We show that this algorithm is capable of improving the reward functions in agents with computational limitations necessitating small bounds on the depth of planning, and also from the use of an inaccurate model (which may be inaccurate due to computationally-motivated approximations). PGRD has few parameters, improves the reward", "title": "" }, { "docid": "6e02cdb0ade3479e0df03c30d9d69fa3", "text": "Reinforcement learning is considered as a promising direction for driving policy learning. However, training autonomous driving vehicle with reinforcement learning in real environment involves non-affordable trial-and-error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network to make model trained in virtual environment be workable in real world. The proposed network can convert non-realistic virtual image input into a realistic one with similar scene structure. Given realistic frames as input, driving policy trained by reinforcement learning can nicely adapt to real world driving. Experiments show that our proposed virtual to real (VR) reinforcement learning (RL) works pretty well. To our knowledge, this is the first successful case of driving policy trained by reinforcement learning that can adapt to real world driving data.", "title": "" }, { "docid": "820fd65b9bef1b7c81c1f23a63dbea57", "text": "In this study, we analysed morphological, anatomical and physiological effects of polyploidisation in Spathiphyllum wallisii in order to evaluate possible interesting advantages of polyploids for ornamental breeding. Stomatal density was negatively correlated with increased ploidy level. Stomatal size increased in polyploids. Tetraploid Spathiphyllum plants had more ovate and thicker leaves. The inflorescence of tetraploids had a more ovate and thicker spathum, a more cylindrical spadix and a thicker but shorter flower stalk. Biomass production of the tetraploids was reduced, as expressed by lower total dry weights, and tetraploids produced fewer shoots and leaves compared with their diploid progenitors. Furthermore, tetraploid Spathiphyllum plants were more resistant to drought stress compared with diploid plants. After 15 days of drought stress, diploids showed symptoms of wilting, while the tetraploids showed almost no symptoms. Further, measurements of stomatal resistance, leaf water potential, relative water content and proline content indicated that the tetraploid genotypes were more resistant to drought stress compared with the diploids.", "title": "" }, { "docid": "c04f67fd5cc7f2f95452046bb18c6cfa", "text": "Bob is a free signal processing and machine learning toolbox originally developed by the Biometrics group at Idiap Research Institute, Switzerland. The toolbox is designed to meet the needs of researchers by reducing development time and efficiently processing data. Firstly, Bob provides a researcher-friendly Python environment for rapid development. Secondly, efficient processing of large amounts of multimedia data is provided by fast C++ implementations of identified bottlenecks. The Python environment is integrated seamlessly with the C++ library, which ensures the library is easy to use and extensible. 
Thirdly, Bob supports reproducible research through its integrated experimental protocols for several databases. Finally, a strong emphasis is placed on code clarity, documentation, and thorough unit testing. Bob is thus an attractive resource for researchers due to this unique combination of ease of use, efficiency, extensibility and transparency. Bob is an open-source library and an ongoing community effort.", "title": "" }, { "docid": "7e08a713a97f153cdd3a7728b7e0a37c", "text": "The availability of long circulating, multifunctional polymers is critical to the development of drug delivery systems and bioconjugates. The ease of synthesis and functionalization make linear polymers attractive but their rapid clearance from circulation compared to their branched or cyclic counterparts, and their high solution viscosities restrict their applications in certain settings. Herein, we report the unusual compact nature of high molecular weight (HMW) linear polyglycerols (LPGs) (LPG - 100; M(n) - 104 kg mol(-1), M(w)/M(n) - 1.15) in aqueous solutions and its impact on its solution properties, blood compatibility, cell compatibility, in vivo circulation, biodistribution and renal clearance. The properties of LPG have been compared with hyperbranched polyglycerol (HPG) (HPG-100), linear polyethylene glycol (PEG) with similar MWs. The hydrodynamic size and the intrinsic viscosity of LPG-100 in water were considerably lower compared to PEG. The Mark-Houwink parameter of LPG was almost 10-fold lower than that of PEG. LPG and HPG demonstrated excellent blood and cell compatibilities. Unlike LPG and HPG, HMW PEG showed dose dependent activation of blood coagulation, platelets and complement system, severe red blood cell aggregation and hemolysis, and cell toxicity. The long blood circulation of LPG-100 (t(1/2β,) 31.8 ± 4 h) was demonstrated in mice; however, it was shorter compared to HPG-100 (t(1/2β,) 39.2 ± 8 h). The shorter circulation half life of LPG-100 was correlated with its higher renal clearance and deformability. Relatively lower organ accumulation was observed for LPG-100 and HPG-100 with some influence of on the architecture of the polymers. Since LPG showed better biocompatibility profiles, longer in vivo circulation time compared to PEG and other linear drug carrier polymers, and has multiple functionalities for conjugation, makes it a potential candidate for developing long circulating multifunctional drug delivery systems similar to HPG.", "title": "" }, { "docid": "50c3e7855f8a654571a62a094a86c4eb", "text": "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. 
It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.", "title": "" }, { "docid": "0297b1f3565e4d1a3554137ac4719cfd", "text": "Systems to automatically provide a representative summary or `Key Phrase' of a piece of music are described. For a `rock' song with `verse' and `chorus' sections, we aim to return the chorus or in any case the most repeated and hence most memorable section. The techniques are less applicable to music with more complicated structure although possibly our general framework could still be used with di erent heuristics. Our process consists of three steps. First we parameterize the song into features. Next we use these features to discover the song structure, either by clustering xed-length segments or by training a hidden Markov model (HMM) for the song. Finally, given this structure, we use heuristics to choose the Key Phrase. Results for summaries of 18 Beatles songs evaluated by ten users show that the technique based on clustering is superior to the HMM approach and to choosing the Key Phrase at random.", "title": "" }, { "docid": "d004de75764e87fe246617cb7e3259a6", "text": "OBJECTIVE\nClinical decision-making regarding the prevention of depression is complex for pregnant women with histories of depression and their health care providers. Pregnant women with histories of depression report preference for nonpharmacological care, but few evidence-based options exist. Mindfulness-based cognitive therapy has strong evidence in the prevention of depressive relapse/recurrence among general populations and indications of promise as adapted for perinatal depression (MBCT-PD). With a pilot randomized clinical trial, our aim was to evaluate treatment acceptability and efficacy of MBCT-PD relative to treatment as usual (TAU).\n\n\nMETHOD\nPregnant adult women with depression histories were recruited from obstetric clinics at 2 sites and randomized to MBCT-PD (N = 43) or TAU (N = 43). Treatment acceptability was measured by assessing completion of sessions, at-home practice, and satisfaction. Clinical outcomes were interview-based depression relapse/recurrence status and self-reported depressive symptoms through 6 months postpartum.\n\n\nRESULTS\nConsistent with predictions, MBCT-PD for at-risk pregnant women was acceptable based on rates of completion of sessions and at-home practice assignments, and satisfaction with services was significantly higher for MBCT-PD than TAU. 
Moreover, at-risk women randomly assigned to MBCT-PD reported significantly improved depressive outcomes compared with participants receiving TAU, including significantly lower rates of depressive relapse/recurrence and lower depressive symptom severity during the course of the study.\n\n\nCONCLUSIONS\nMBCT-PD is an acceptable and clinically beneficial program for pregnant women with histories of depression; teaching the skills and practices of mindfulness meditation and cognitive-behavioral therapy during pregnancy may help to reduce the risk of depression during an important transition in many women's lives.", "title": "" }, { "docid": "eccfd842c24cf6b87c2b311dc8b29dd3", "text": "Sparse coding exhibits good performance in many computer vision applications. However, due to the overcomplete codebook and the independent coding process, the locality and the similarity among the instances to be encoded are lost. To preserve such locality and similarity information, we propose a Laplacian sparse coding (LSc) framework. By incorporating the similarity preserving term into the objective of sparse coding, our proposed Laplacian sparse coding can alleviate the instability of sparse codes. Furthermore, we propose a Hypergraph Laplacian sparse coding (HLSc), which extends our Laplacian sparse coding to the case where the similarity among the instances defined by a hypergraph. Specifically, this HLSc captures the similarity among the instances within the same hyperedge simultaneously, and also makes the sparse codes of them be similar to each other. Both Laplacian sparse coding and Hypergraph Laplacian sparse coding enhance the robustness of sparse coding. We apply the Laplacian sparse coding to feature quantization in Bag-of-Words image representation, and it outperforms sparse coding and achieves good performance in solving the image classification problem. The Hypergraph Laplacian sparse coding is also successfully used to solve the semi-auto image tagging problem. The good performance of these applications demonstrates the effectiveness of our proposed formulations in locality and similarity preservation.", "title": "" }, { "docid": "f04d2a3bc0c3428823060ad59ff2222a", "text": "This paper proposes techniques to fabricate synthetic gecko foot-hairs as dry adhesives for future wallclimbing and surgical robots, and models for understanding the synthetic hair design issues. Two nanomolding fabrication techniques are proposed: the first method uses nanoprobe indented flat wax surface and the second one uses a nano-pore membrane as a template. These templates are molded with silicone rubber, polyimide, etc. type of polymers under vacuum. Next, design parameters such as length, diameter, stiffness, density, and orientation of hairs are determined for non-matting and rough surface adaptability. Preliminary micro/nano-hair prototypes showed adhesion close to the predicted values for natural specimens (around 100 nN each).", "title": "" }, { "docid": "526ac4f1148cc479556b8c1d4ddb0d26", "text": "Rating prediction is a key task of e-commerce recommendation mechanisms. Recent studies in social recommendation enhance the performance of rating predictors by taking advantage of user relationships. However, these prediction approaches mostly rely on user personal information which is a privacy threat. In this paper, we present dTrust, a simple social recommendation approach that avoids using user personal information. 
It relies uniquely on the topology of an anonymized trust-user-item network that combines user trust relations with user rating scores. This topology is fed into a deep feed-forward neural network. Experiments on real-world data sets showed that dTrust outperforms state-of-the-art in terms of Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) scores for both warm-start and cold-start problems.", "title": "" }, { "docid": "95513348196c70bb6242137685a6fbe5", "text": "People speak at different levels of specificity in different situations.1 A conversational agent should have this ability and know when to be specific and when to be general. We propose an approach that gives a neural network–based conversational agent this ability. Our approach involves alternating between data distillation and model training : removing training examples that are closest to the responses most commonly produced by the model trained from the last round and then retrain the model on the remaining dataset. Dialogue generation models trained with different degrees of data distillation manifest different levels of specificity. We then train a reinforcement learning system for selecting among this pool of generation models, to choose the best level of specificity for a given input. Compared to the original generative model trained without distillation, the proposed system is capable of generating more interesting and higher-quality responses, in addition to appropriately adjusting specificity depending on the context. Our research constitutes a specific case of a broader approach involving training multiple subsystems from a single dataset distinguished by differences in a specific property one wishes to model. We show that from such a set of subsystems, one can use reinforcement learning to build a system that tailors its output to different input contexts at test time. Depending on their knowledge, interlocutors, mood, etc.", "title": "" }, { "docid": "fbcd6379081030e22d7a84438c48f570", "text": "We present a method to compute an initial alignment for pairwise registration of point clouds. This method uses the properties of a rigid body transformation - the ratio of lengths is preserved, the euclidean distance between points is preserved - to find congruent pyramids in two point clouds. The corresponding vertices of the congruent pyramids are used to derive a closed form solution for initial alignment. The alignment is refined further using the Iterative Closest Point algorithm. We validate the method on challenging datasets - which include airborne LIDAR, outdoor, and indoor - having initial offsets and varying densities.", "title": "" }, { "docid": "65250c2c208e410ae5c01110d77f64c9", "text": "The Mad package described here facilitates the evaluation of first derivatives of multidimensional functions that are defined by computer codes written in MATLAB. The underlying algorithm is the well-known forward mode of automatic differentiation implemented via operator overloading on variables of the class fmad. The main distinguishing feature of this MATLAB implementation is the separation of the linear combination of derivative vectors into a separate derivative vector class derivvec. This allows for the straightforward performance optimization of the overall package. 
Additionally, by internally using a matrix (two-dimensional) representation of arbitrary dimension directional derivatives, we may utilize MATLAB's sparse matrix class to propagate sparse directional derivatives for MATLAB code which uses arbitrary dimension arrays. On several examples, the package is shown to be more efficient than Verma's ADMAT package [Verma 1998a].", "title": "" }, { "docid": "62db862b080e12decd61b09878e4b893", "text": "OBJECTIVE\nThe purpose of this study was to estimate the incidence of postpartum hemorrhage (PPH) in the United States and to assess trends.\n\n\nSTUDY DESIGN\nPopulation-based data from the 1994-2006 National Inpatient Sample were used to identify women who were hospitalized with postpartum hemorrhage. Data for each year were plotted, and trends were assessed. Multivariable logistic regression was used in an attempt to explain the difference in PPH incidence between 1994 and 2006.\n\n\nRESULTS\nPPH increased 26% between 1994 and 2006 from 2.3% (n = 85,954) to 2.9% (n = 124,708; P < .001). The increase primarily was due to an increase in uterine atony, from 1.6% (n = 58,597) to 2.4% (n = 99,904; P < .001). The increase in PPH could not be explained by changes in rates of cesarean delivery, vaginal birth after cesarean delivery, maternal age, multiple birth, hypertension, or diabetes mellitus.\n\n\nCONCLUSION\nPopulation-based surveillance data signal an apparent increase in PPH caused by uterine atony. More nuanced clinical data are needed to understand the factors that are associated with this trend.", "title": "" }, { "docid": "a4f960905077291bd6da9359fd803a9c", "text": "In this paper, we propose a new framework named Data Augmentation for Domain-Invariant Learning (DADIL). In the field of manufacturing, labeling sensor data as normal or abnormal is helpful for improving productivity and avoiding problems. In practice, however, the status of equipment may change due to changes in maintenance and settings (referred to as a “domain change”), which makes it difficult to collect sufficient homogeneous data. Therefore, it is important to develop a discriminative model that can use a limited number of data samples. Moreover, real data might contain noise that could have a negative impact. We focus on the following aspect: The difficulties of a domain change are also due to the limited data. Although the number of data samples in each domain is low, we make use of data augmentation which is a promising way to mitigate the influence of noise and enhance the performance of discriminative models. In our data augmentation method, we generate “pseudo data” by combining the data for each label regardless of the domain and extract a domain-invariant representation for classification. We experimentally show that this representation is effective for obtaining the label precisely using real datasets.", "title": "" } ]
scidocsrr
d44020eae33592ac5e8e3c0b2f52c9d2
ON-THE-FLY NETWORK PRUNING
[ { "docid": "35625f248c81ebb5c20151147483f3f6", "text": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions [3]. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators [1] have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "title": "" }, { "docid": "bf1f9f28d7077909851c41eaed31e0db", "text": "Often the best performing supervised learning models are ensembles of hundreds or thousands of base-level classifiers. Unfortunately, the space required to store this many classifiers, and the time required to execute them at run-time, prohibits their use in applications where test sets are large (e.g. Google), where storage space is at a premium (e.g. PDAs), and where computational power is limited (e.g. hearing aids). We present a method for \"compressing\" large, complex ensembles into smaller, faster models, usually without significant loss in performance.", "title": "" }, { "docid": "db433a01dd2a2fd80580ffac05601f70", "text": "While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher’s intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.", "title": "" } ]
[ { "docid": "fdf0f219f1251aed73732f5a07b65ccf", "text": "This study evaluates the use of a hierarchical classification approach to automated assessment of essays. Automated essay scoring (AES) generally relies on machine learning techniques that compute essay scores using a set of text variables. Unlike previous studies that rely on regression models, this study computes essay scores using a hierarchical approach, analogous to an incremental algorithm for hierarchical classification. The corpus in this study consists of 1243 argumentative (persuasive) essays written on 14 different prompts, across 3 different grade levels (9th grade, 11th grade, college freshman), and four different time limits for writing or temporal conditions (untimed essays and essays written in 10, 15, and 25 minute increments). The features included in the analysis are computed using the automated tools, Coh-Metrix, the Writing Assessment Tool (WAT), and Linguistic Inquiry and Word Count (LIWC). Overall, the models developed to score all the essays in the data set report 55% exact accuracy and 92% adjacent accuracy between the predicted essay scores and the human scores. The results indicate that this is a promising approach to AES that could provide more specific feedback to writers and may be relevant to other natural language computations, such as the scoring of short answers in comprehension or knowledge assessments.", "title": "" }, { "docid": "d9ece650bec596b2a817a91cd26600bf", "text": "A new developmental model of borderline personality disorder (BPD) and its treatment is advanced based on evolutionary considerations concerning the role of attachment, mentalizing, and epistemic trust in the development of psychopathology. We propose that vulnerability to psychopathology in general is related to impairments in epistemic trust, leading to disruptions in the process of salutogenesis, the positive effects associated with the capacity to benefit from the social environment. BPD is perhaps the disorder par excellence that illustrates this view. We argue that this conceptualization makes sense of the presence of both marked rigidity and instability in BPD, and has far-reaching implications for intervention.", "title": "" }, { "docid": "102bec350390b46415ae07128cb4e77f", "text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. 
We believe generative video models can impact many applications in video understanding and simulation.", "title": "" }, { "docid": "7437f0c8549cb8f73f352f8043a80d19", "text": "Graphene is considered as one of leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.", "title": "" }, { "docid": "a4a4c69fb5753910329e857bd51321ec", "text": "We introduce the Piecewise-Constant Conditional Intensity Model, a model for learning temporal dependencies in event streams. We describe a closed-form Bayesian approach to learning these models, and describe an importance sampling algorithm for forecasting future events using these models, using a proposal distribution based on Poisson superposition. We then use synthetic data, supercomputer event logs, and web search query logs to illustrate that our learning algorithm can efficiently learn nonlinear temporal dependencies, and that our importance sampling algorithm can effectively forecast future events.", "title": "" }, { "docid": "e49dcbcb0bb8963d4f724513d66dd3a0", "text": "To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit to the other agents’ policies during training, failing to sufficiently generalize during execution. We introduce a new metric, joint-policy correlation, to quantify this effect. We describe an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection. The algorithm generalizes previous ones such as InRL, iterated best response, double oracle, and fictitious play. Then, we present a scalable implementation which reduces the memory requirement using decoupled meta-solvers. Finally, we demonstrate the generality of the resulting policies in two partially observable settings: gridworld coordination games and poker.", "title": "" }, { "docid": "5f8ddaa9130446373a9a5d44c17ca604", "text": "Object detection is a crucial task for autonomous driving. In addition to requiring high accuracy to ensure safety, object detection for autonomous driving also requires realtime inference speed to guarantee prompt vehicle control, as well as small model size and energy efficiency to enable embedded system deployment. In this work, we propose SqueezeDet, a fully convolutional neural network for object detection that aims to simultaneously satisfy all of the above constraints. 
In our network we use convolutional layers not only to extract feature maps, but also as the output layer to compute bounding boxes and class probabilities. The detection pipeline of our model only contains a single forward pass of a neural network, thus it is extremely fast. Our model is fullyconvolutional, which leads to small model size and better energy efficiency. Finally, our experiments show that our model is very accurate, achieving state-of-the-art accuracy on the KITTI [10] benchmark. The source code of SqueezeDet is open-source released.", "title": "" }, { "docid": "e32b7124ab91993c9580d70e9d599e0a", "text": "Visual sensor networks (VSNs) are receiving a lot of attention in research, and at the same time, commercial applications are starting to emerge. VSN devices come with image sensors, adequate processing power, and memory. They use wireless communication interfaces to collaborate and jointly solve tasks such as tracking persons within the network. VSNs are expected to replace not only many traditional, closed-circuit surveillance systems but also to enable emerging applications in scenarios such as elderly care, home monitoring, or entertainment. In all of these applications, VSNs monitor a potentially large group of people and record sensitive image data that might contain identities of persons, their behavior, interaction patterns, or personal preferences. These intimate details can be easily abused, for example, to derive personal profiles.\n The highly sensitive nature of images makes security and privacy in VSNs even more important than in most other sensor and data networks. However, the direct use of security techniques developed for related domains might be misleading due to the different requirements and design challenges. This is especially true for aspects such as data confidentiality and privacy protection against insiders, generating awareness among monitored people, and giving trustworthy feedback about recorded personal data—all of these aspects go beyond what is typically required in other applications.\n In this survey, we present an overview of the characteristics of VSN applications, the involved security threats and attack scenarios, and the major security challenges. A central contribution of this survey is our classification of VSN security aspects into data-centric, node-centric, network-centric, and user-centric security. We identify and discuss the individual security requirements and present a profound overview of related work for each class. We then discuss privacy protection techniques and identify recent trends in VSN security and privacy. A discussion of open research issues concludes this survey.", "title": "" }, { "docid": "a66b5b6dea68e5460b227af4caa14ef3", "text": "This paper will discuss and compare event representations across a variety of types of event annotation: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of event representations are presented, along with a comparison of data annotated according to each event representation. An event annotation experiment is also discussed, including annotation for all of these representations on the same set of sample data, with the purpose of being able to compare actual annotation across all of these approaches as directly as possible. 
We walk through a brief example to illustrate the various annotation approaches, and to show the intersections among the various annotated data sets.", "title": "" }, { "docid": "5539885c88d11eb6a9c4e54b6e399863", "text": "Deep neural network is difficult to train and this predicament becomes worse as the depth increases. The essence of this problem exists in the magnitude of backpropagated errors that will result in gradient vanishing or exploding phenomenon. We show that a variant of regularizer which utilizes orthonormality among different filter banks can alleviate this problem. Moreover, we design a backward error modulation mechanism based on the quasi-isometry assumption between two consecutive parametric layers. Equipped with these two ingredients, we propose several novel optimization solutions that can be utilized for training a specific-structured (repetitively triple modules of Conv-BNReLU) extremely deep convolutional neural network (CNN) WITHOUT any shortcuts/identity mappings from scratch. Experiments show that our proposed solutions can achieve distinct improvements for a 44-layer and a 110-layer plain networks on both the CIFAR-10 and ImageNet datasets. Moreover, we can successfully train plain CNNs to match the performance of the residual counterparts. Besides, we propose new principles for designing network structure from the insights evoked by orthonormality. Combined with residual structure, we achieve comparative performance on the ImageNet dataset.", "title": "" }, { "docid": "2a61f00e55f1a3b98bd0434953fefb7f", "text": "The object of this study was to examine changes in muscular strength, power, and resting hormonal concentrations during 6 weeks of detraining (DTR) in recreationally strength-trained men. Each subject was randomly assigned to either a DTR (n = 9) or resistance training (RT; n = 7) group after being matched for strength, body size, and training experience. Muscular strength and power testing, anthropometry, and blood sampling were performed before the experimental period (T1), after 3 weeks (T2), and after the 6-week experimental period (T3). One-repetition maximum (1RM) shoulder and bench press increased in RT at T3 (p ≤ 0.05), whereas no significant changes were observed in DTR. Peak power output and mean power output significantly decreased (9 and 10%) in DTR at T2. Peak torque of the elbow flexors at 90 degrees did not change in the RT group but did significantly decrease by 11.9% at T3 compared with T1 in the DTR group. Vertical jump height increased in RT at T2 but did not change in DTR. Neither group displayed any changes in 1RM squat, body mass, percent body fat, or resting concentrations of growth hormone, follicle-stimulating hormone, luteinizing hormone, sex hormone-binding globulin, testosterone, cortisol, or adrenocorticotropin. These data demonstrate that 6 weeks of resistance DTR in recreationally trained men affects power more than it does strength without any accompanying changes in resting hormonal concentrations. For the recreational weight trainer, losses in strength over 6 weeks are less of a concern compared with anaerobic power and upper arm isometric force production. Anaerobic power exercise with a high metabolic component coming from glycolysis might be of importance for reducing the impact of DTR on Wingate power performances. 
A minimal maintenance training program is recommended for the recreational lifter to offset any reductions in performance.", "title": "" }, { "docid": "8106e11ecb11ffc131a36917a60dce33", "text": "Augmented Reality, Architecture and Ubiquity: Technologies, Theories and Frontiers", "title": "" }, { "docid": "f47841bc67e842102dc72dc8d39d8262", "text": "Eye gaze estimation systems calculate the direction of human eye gaze. Numerous accurate eye gaze estimation systems considering a user s head movement have been reported. Although the systems allow large head motion, they require multiple devices and complicate computation in order to obtain the geometrical positions of an eye, cameras, and a monitor. The light-reflection-based method proposed in this paper does not require any knowledge of their positions, so the system utilizing the proposed method is lighter and easier to use than the conventional systems. To estimate where the user looks allowing ample head movement, we utilize an invariant value (cross-ratio) of a projective space. Also, a robust feature detection using an ellipse-specific active contour is suggested in order to find features exactly. Our proposed feature detection and estimation method are simple and fast, and shows accurate results under large head motion. 2004 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "cd735ea51ec77944f0ab26a5cc0e6105", "text": "Advances in smartphone technology have promoted the rapid development of mobile apps. However, the availability of a huge number of mobile apps in application stores has imposed the challenge of finding the right apps to meet the user needs. Indeed, there is a critical demand for personalized app recommendations. Along this line, there are opportunities and challenges posed by two unique characteristics of mobile apps. First, app markets have organized apps in a hierarchical taxonomy. Second, apps with similar functionalities are competing with each other. Although there are a variety of approaches for mobile app recommendations, these approaches do not have a focus on dealing with these opportunities and challenges. To this end, in this article, we provide a systematic study for addressing these challenges. Specifically, we develop a structural user choice model (SUCM) to learn fine-grained user preferences by exploiting the hierarchical taxonomy of apps as well as the competitive relationships among apps. Moreover, we design an efficient learning algorithm to estimate the parameters for the SUCM model. Finally, we perform extensive experiments on a large app adoption dataset collected from Google Play. The results show that SUCM consistently outperforms state-of-the-art Top-N recommendation methods by a significant margin.", "title": "" }, { "docid": "7368d5b91a5d8e8f839a826de6e2265e", "text": "BACKGROUND\nAssessing pruritus severity is difficult because of its subjective nature. A questionnaire that takes into account how the symptom is perceived by the patient may provide a more accurate representation of the pruritus. 
However, recently developed questionnaires do not specifically quantify severity of the symptom.\n\n\nOBJECTIVES\nTo develop a self-report questionnaire to measure pruritus severity and to provide initial evidence of its validity and reliability.\n\n\nMETHODS\nWe modified a previously developed interview for the characterization and evaluation of pruritus, which was completed along with the RAND-36 Health Status Inventory and Dermatology Life Quality Index by patients with psoriasis-associated pruritus. Exploratory factor analysis, studies of internal consistency, and correlation analyses with health-related quality of life scores were used to help determine which components of the modified pruritus interview to include in the new questionnaire, the Itch Severity Scale (ISS). The ISS was then assessed for construct validity, internal consistency reliability and test-retest reliability.\n\n\nRESULTS\nSeven of the initial 11 components of the modified pruritus interview were included in the ISS. ISS scores correlated moderately with physical (r=-0.483) and mental (r=-0.492) health composite scores of the RAND-36 and strongly with Dermatology Life Quality Index scores (r=0.628), evidence of construct validity. It had an internal consistency reliability of 0.80 and a test-retest reliability of 0.95.\n\n\nCONCLUSIONS\nBased on this preliminary evidence of validity and reliability, this new seven-item ISS may be useful in comparing pruritus severity among different disease populations or in assessing pruritus treatment effectiveness.", "title": "" }, { "docid": "c8245d1c57ce52020743043d88be0942", "text": "P2P streaming applications are very popular on the Internet today. However, a mobile device in P2P streaming not only needs to continuously receive streaming data from other peers for its playback, but also needs to continuously exchange control information (e.g., buffermaps and file chunk requests) with neighboring peers and upload the downloaded streaming data to them. These lead to excessive battery power consumption on the mobile device.\n In this paper, we first conduct Internet experiments to study in-depth the impact of control traffic and uploading traffic on battery power consumption with several popular Internet P2P streaming applications. Motivated by measurement results, we design and implement a system called BlueStreaming that effectively utilizes the commonly existing Bluetooth interface on mobile devices. Instead of activating WiFi and Bluetooth interfaces alternatively, BlueStreaming keeps Bluetooth active all the time to transmit delay-sensitive control traffic while using WiFi for streaming data traffic. BlueStreaming trades Bluetooth's power consumption for much more significant energy saving from shaped WiFi traffic. To evaluate the performance of BlueStreaming, we have implemented prototypes on both Windows and Mac to access existing popular Internet P2P streaming services. The experimental results show that BlueStreaming can save up to 46% battery power compared to the commodity PSM scheme.", "title": "" }, { "docid": "797166b4c68bcdc7a8860462117e2051", "text": "In this paper we propose a novel feature descriptor Extended Co-occurrence HOG (ECoHOG) and integrate it with dense point trajectories demonstrating its usefulness in fine grained activity recognition. This feature is inspired by original Co-occurrence HOG (CoHOG) that is based on histograms of occurrences of pairs of image gradients in the image. 
Instead relying only on pure histograms we introduce a sum of gradient magnitudes of co-occurring pairs of image gradients in the image. This results in giving the importance to the object boundaries and straightening the difference between the moving foreground and static background. We also couple ECoHOG with dense point trajectories extracted using optical flow from video sequences and demonstrate that they are extremely well suited for fine grained activity recognition. Using our feature we outperform state of the art methods in this task and provide extensive quantitative evaluation.", "title": "" }, { "docid": "82fa51c143159f2b85f9d2e5b610e30d", "text": "Strategies are systematic and long-term approaches to problems. Federal, state, and local governments are investing in the development of strategies to further their e-government goals. These strategies are based on their knowledge of the field and the relevant resources available to them. Governments are communicating these strategies to practitioners through the use of practical guides. The guides provide direction to practitioners as they consider, make a case for, and implement IT initiatives. This article presents an analysis of a selected set of resources government practitioners use to guide their e-government efforts. A selected review of current literature on the challenges to information technology initiatives is used to create a framework for the analysis. A gap analysis examines the extent to which IT-related research is reflected in the practical guides. The resulting analysis is used to identify a set of commonalities across the practical guides and a set of recommendations for future development of practitioner guides and future research into e-government initiatives. D 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "c9aa8454246e983e9aa2752bfa667f43", "text": "BACKGROUND\nADHD is diagnosed and treated more often in males than in females. Research on gender differences suggests that girls may be consistently underidentified and underdiagnosed because of differences in the expression of the disorder among boys and girls. One aim of the present study was to assess in a clinical sample of medication naïve boys and girls with ADHD, whether there were significant gender x diagnosis interactions in co-existing symptom severity and executive function (EF) impairment. The second aim was to delineate specific symptom ratings and measures of EF that were most important in distinguishing ADHD from healthy controls (HC) of the same gender.\n\n\nMETHODS\nThirty-seven females with ADHD, 43 males with ADHD, 18 HC females and 32 HC males between 8 and 17 years were included. Co-existing symptoms were assessed with self-report scales and parent ratings. EF was assessed with parent ratings of executive skills in everyday situations (BRIEF), and neuropsychological tests. The three measurement domains (co-existing symptoms, BRIEF, neuropsychological EF tests) were investigated using analysis of variance (ANOVA) and random forest classification.\n\n\nRESULTS\nANOVAs revealed only one significant diagnosis x gender interaction, with higher rates of self-reported anxiety symptoms in females with ADHD. Random forest classification indicated that co-existing symptom ratings was substantially better in distinguishing subjects with ADHD from HC in females (93% accuracy) than in males (86% accuracy). The most important distinguishing variable was self-reported anxiety in females, and parent ratings of rule breaking in males. 
Parent ratings of EF skills were better in distinguishing subjects with ADHD from HC in males (96% accuracy) than in females (92% accuracy). Neuropsychological EF tests had only a modest ability to categorize subjects as ADHD or HC in males (73% accuracy) and females (79% accuracy).\n\n\nCONCLUSIONS\nOur findings emphasize the combination of self-report and parent rating scales for the identification of different comorbid symptom expression in boys and girls already diagnosed with ADHD. Self-report scales may increase awareness of internalizing problems particularly salient in females with ADHD.", "title": "" }, { "docid": "28b3d7fbcb20f5548d22dbf71b882a05", "text": "In this paper, we propose a novel abnormal event detection method with spatio-temporal adversarial networks (STAN). We devise a spatio-temporal generator which synthesizes an inter- frame by considering spatio-temporal characteristics with bidirectional ConvLSTM. A proposed spatio-temporal discriminator determines whether an input sequence is real-normal or not with 3D convolutional layers. These two networks are trained in an adversarial way to effectively encode spatio-temporal features of normal patterns. After the learning, the generator and the discriminator can be independently used as detectors, and deviations from the learned normal patterns are detected as abnormalities. Experimental results show that the proposed method achieved competitive performance compared to the state-of-the-art methods. Further, for the interpretation, we visualize the location of abnormal events detected by the proposed networks using a generator loss and discriminator gradients.", "title": "" } ]
scidocsrr
ec004e9d42d66e9e6a5a723955f836bf
Microcontroller Based Heart Rate Monitor
[ { "docid": "5eccbb19af4a1b19551ce4c93c177c07", "text": "This paper presents the design and development of a microcontroller based heart rate monitor using fingertip sensor. The device uses the optical technology to detect the flow of blood through the finger and offers the advantage of portability over tape-based recording systems. The important feature of this research is the use of Discrete Fourier Transforms to analyse the ECG signal in order to measure the heart rate. Evaluation of the device on real signals shows accuracy in heart rate estimation, even under intense physical activity. The performance of HRM device was compared with ECG signal represented on an oscilloscope and manual pulse measurement of heartbeat, giving excellent results. Our proposed Heart Rate Measuring (HRM) device is economical and user friendly.", "title": "" }, { "docid": "0a3cac4df8679fcc9b53a32b3dcaa695", "text": "This paper describes the design of a simple, low-cost microcontroller based heart rate measuring device with LCD output. Heart rate of the subject is measured from the finger using optical sensors and the rate is then averaged and displayed on a text based LCD.", "title": "" } ]
[ { "docid": "1dfaf55ac797e4b463f1c14e11694dbc", "text": "A new design for a tail-sitting vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) was proposed. A nonlinear mathematical model of the vehicle dynamics was constructed by combining simple estimation methods. The flight characteristics were revealed through a trim analysis and an optimized transitional flight path analysis by using the mathematical model. The trim analysis revealed the existence of a minimum path angle to avoid stall in lowspeed flights. Although the value of the angle was positive without flaps or slats, these high lift devices (HLDs) improved this value. In particular, the slats provided a substantial improvement in the value of the angle and enabled a descent rate of 2.0 m/sec. In the optimized transitional flight path analysis, a level outbound transition without HLDs was achieved although a trimmed level flight at low speeds as was shown in the trim analysis was not possible; this was because the outbound transition was an accelerative flight. On the contrary, without HLDs, the vehicle could not avoid climbing during inbound transitions to avoid stall. The slats provided a satisfactory improvement during the transition and made a level inbound transition possible. These results showed the necessity of leading-edge slats for the proposed tail-sitting VTOL UAV.", "title": "" }, { "docid": "604619dd5f23569eaff40eabc8e94f52", "text": "Understanding the causes and effects of species invasions is a priority in ecology and conservation biology. One of the crucial steps in evaluating the impact of invasive species is to map changes in their actual and potential distribution and relative abundance across a wide region over an appropriate time span. While direct and indirect remote sensing approaches have long been used to assess the invasion of plant species, the distribution of invasive animals is mainly based on indirect methods that rely on environmental proxies of conditions suitable for colonization by a particular species. The aim of this article is to review recent efforts in the predictive modelling of the spread of both plant and animal invasive species using remote sensing, and to stimulate debate on the potential use of remote sensing in biological invasion monitoring and forecasting. Specifically, the challenges and drawbacks of remote sensing techniques are discussed in relation to: i) developing species distribution models, and ii) studying life cycle changes and phenological variations. Finally, the paper addresses the open challenges and pitfalls of remote sensing for biological invasion studies including sensor characteristics, upscaling and downscaling in species distribution models, and uncertainty of results.", "title": "" }, { "docid": "d71e9063c8ac026f1592d8db4d927edc", "text": "With the advancement of power electronics, new materials and novel bearing technologies, there has been an active development of high speed machines in recent years. The simple rotor structure makes switched reluctance machines (SRM) candidates for high speed operation. This paper has presents the design of a low power, 50,000 RPM 6/4 SRM having a toroidally wound stator. Finite element analysis (FEA) shows an equivalence to conventionally wound SRMs in terms of torque capability. With the conventional asymmetric converter and classic angular control, this toroidal-winding SRM (TSRM) is able to produce 233.20 W mechanical power with an efficiency of 75% at the FEA stage. 
Considering the enhanced cooling capability as the winding is directly exposed to air, the toroidal-winding is a good option for high-speed SRM.", "title": "" }, { "docid": "f577f970f841d8dee34e524ba661e727", "text": "The rapid growth in the amount of user-generated content (UGCs) online necessitates for social media companies to automatically extract knowledge structures (concepts) from user-generated images (UGIs) and user-generated videos (UGVs) to provide diverse multimedia-related services. For instance, recommending preference-aware multimedia content, the understanding of semantics and sentics from UGCs, and automatically computing tag relevance for UGIs are benefited from knowledge structures extracted from multiple modalities. Since contextual information captured by modern devices in conjunction with a media item greatly helps in its understanding, we leverage both multimedia content and contextual information (eg., spatial and temporal metadata) to address above-mentioned social media problems in our doctoral research. We present our approaches, results, and works in progress on these problems.", "title": "" }, { "docid": "73267d80249aeaf866906b95979b43ef", "text": "UNLABELLED\nFjorback LO, Arendt M, Ørnbøl E, Fink P, Walach H. Mindfulness-Based Stress Reduction and Mindfulness-Based Cognitive Therapy - a systematic review of randomized controlled trials.\n\n\nOBJECTIVE\n  To systematically review the evidence for MBSR and MBCT.\n\n\nMETHOD\n  Systematic searches of Medline, PsycInfo and Embase were performed in October 2010. MBSR, MBCT and Mindfulness Meditation were key words. Only randomized controlled trials (RCT) using the standard MBSR/MBCT programme with a minimum of 33 participants were included.\n\n\nRESULTS\n  The search produced 72 articles, of which 21 were included. MBSR improved mental health in 11 studies compared to wait list control or treatment as usual (TAU) and was as efficacious as active control group in three studies. MBCT reduced the risk of depressive relapse in two studies compared to TAU and was equally efficacious to TAU or an active control group in two studies. Overall, studies showed medium effect sizes. Among other limitations are lack of active control group and long-term follow-up in several studies.\n\n\nCONCLUSION\n  Evidence supports that MBSR improves mental health and MBCT prevents depressive relapse. Future RCTs should apply optimal design including active treatment for comparison, properly trained instructors and at least one-year follow-up. Future research should primarily tackle the question of whether mindfulness itself is a decisive ingredient by controlling against other active control conditions or true treatments.", "title": "" }, { "docid": "0c5dbac11af955a8261a4f3b8b5fe908", "text": "We describe a calibration and rendering technique for a projector that can render rectangular images under keystoned position. The projector utilizes a rigidly attached camera to form a stereo pair. We describe a very easy to use technique for calibration of the projector-camera pair using only black planar surfaces. 
We present an efficient rendering method to pre-warp images so that they appear correctly on the screen, and show experimental results.", "title": "" }, { "docid": "88806c1d72569f0ca5eba300ad4c1c10", "text": "We recall some properties of cycloidal curves well known for being defined as roulettes of circles of fixed radius and present a generation using variable circles as a generalisation of traditional roulettes and a different approach of these well known curves. We present two types of associated epi- and hypo-cycloids with orthogonality properties and give a new point of view at classical examples. We describe couples of associated cycloidals that can rotate inside a couple of cycloidal envelopes and stay constantly crossing at right angle. 1 Epicycles and Cycloidals: Astronomy has used epicycles for a long time to describe, before Kepler and Newton, the motion of planet in the ptolemaic system. This theory of epicycles was not really explanatory and imposed to use the composition of circle motion (epicycles) to complete the description of the trajectory. Rolling curves in the plane (or roulettes) used to study only curves assimilated to rigid objects. These roulettes, as the well known cycloid, were studied by many mathematicians of sixteenth century: Pascal, Huygens, Roemer, La Hire, McLaurin, and many others. They present individually a great number of geometric properties and collectively other fascinating particularities generated by the rolling of circles or by envelopes of moving circles in the plane. We can find on the web many pages about cycloidal curves with supernatural properties isolated or collectively with special motions that seems nearly impossible - see examples on (9), (10), (11) web pages. All are generated by only rotation and rolling of circles on other circles. The cycloidals generated, when algebraic so generated as roulettes by two circles (fixed and rolling) with radii in a rational ratio have wonderful geometrical specificities studied since a long", "title": "" }, { "docid": "5ddcfb5404ceaffd6957fc53b4b2c0d8", "text": "A router's main function is to allow communication between different networks as quickly as possible and in efficient manner. The communication can be between LAN or between LAN and WAN. A firewall's function is to restrict unwanted traffic. In big networks, routers and firewall tasks are performed by different network devices. But in small networks, we want both functions on same device i.e. one single device performing both routing and firewalling. We call these devices as routing firewall. In Traditional networks, the devices are already available. But the next generation networks will be powered by Software Defined Networks. For wide adoption of SDN, we need northbound SDN applications such as routers, load balancers, firewalls, proxy servers, Deep packet inspection devices, routing firewalls running on OpenFlow based physical and virtual switches. But the SDN is still in early stage, so still there is very less availability of these applications. There already exist simple L3 Learning application which provides very elementary router function and also simple stateful firewalls providing basic access control. 
In this paper, we are implementing one SDN Routing Firewall Application which will perform both the routing and firewall function.", "title": "" }, { "docid": "66cd10e39a91fb421d1145b2ebe7246c", "text": "Previous research suggests that heterosexual women's sexual arousal patterns are nonspecific; heterosexual women demonstrate genital arousal to both preferred and nonpreferred sexual stimuli. These patterns may, however, be related to the intense and impersonal nature of the audiovisual stimuli used. The current study investigated the gender specificity of heterosexual women's sexual arousal in response to less intense sexual stimuli, and also examined the role of relationship context on both women's and men's genital and subjective sexual responses. Assessments were made of 43 heterosexual women's and 9 heterosexual men's genital and subjective sexual arousal to audio narratives describing sexual or neutral encounters with female and male strangers, friends, or long-term relationship partners. Consistent with research employing audiovisual sexual stimuli, men demonstrated a category-specific pattern of genital and subjective arousal with respect to gender, while women showed a nonspecific pattern of genital arousal, yet reported a category-specific pattern of subjective arousal. Heterosexual women's nonspecific genital response to gender cues is not a function of stimulus intensity or relationship context. Relationship context did significantly affect women's genital sexual arousal--arousal to both female and male friends was significantly lower than to the stranger and long-term relationship contexts--but not men's. These results suggest that relationship context may be a more important factor in heterosexual women's physiological sexual response than gender cues.", "title": "" }, { "docid": "8c6637a398637301a5a396f87b73bcb4", "text": "The auditory system of monkeys includes a large number of interconnected subcortical nuclei and cortical areas. At subcortical levels, the structural components of the auditory system of monkeys resemble those of nonprimates, but the organization at cortical levels is different. In monkeys, the ventral nucleus of the medial geniculate complex projects in parallel to a core of three primary-like auditory areas, AI, R, and RT, constituting the first stage of cortical processing. These areas interconnect and project to the homotopic and other locations in the opposite cerebral hemisphere and to a surrounding array of eight proposed belt areas as a second stage of cortical processing. The belt areas in turn project in overlapping patterns to a lateral parabelt region with at least rostral and caudal subdivisions as a third stage of cortical processing. The divisions of the parabelt distribute to adjoining auditory and multimodal regions of the temporal lobe and to four functionally distinct regions of the frontal lobe. Histochemically, chimpanzees and humans have an auditory core that closely resembles that of monkeys. The challenge for future researchers is to understand how this complex system in monkeys analyzes and utilizes auditory information.", "title": "" }, { "docid": "6ec3f783ec49c0b3e51a704bc3bd03ec", "text": "Abstract: It has been suggested by many supply chain practitioners that in certain cases inventory can have a stimulating effect on the demand. In mathematical terms this amounts to the demand being a function of the inventory level alone. 
In this work we propose a logistic growth model for the inventory dependent demand rate and solve first the continuous time deterministic optimal control problem of maximising the present value of the total net profit over an infinite horizon. It is shown that under a strict condition there is a unique optimal stock level which the inventory planner should maintain in order to satisfy demand. The stochastic version of the optimal control problem is considered next. A bang-bang type of optimal control problem is formulated and the associated Hamilton-Jacobi-Bellman equation is solved. The inventory level that signifies a switch in the ordering strategy is worked out in the stochastic case.", "title": "" }, { "docid": "a28e63820d658c6e272000dacb7096fd", "text": "The feature-based graphical approach to robotic mapping provides a representationally rich and computationally efficient framework for an autonomous agent to learn a model of its environment. However, this formulation does not naturally support long-term autonomy because it lacks a notion of environmental change; in reality, “everything changes and nothing stands still, ” and any mapping and localization system that aims to support truly persistent autonomy must be similarly adaptive. To that end, in this paper we propose a novel feature-based model of environmental evolution over time. Our approach is based upon the development of an expressive probabilistic generative feature persistence model that describes the survival of abstract semi-static environmental features over time. We show that this model admits a recursive Bayesian estimator, the persistence filter, that provides an exact online method for computing, at each moment in time, an explicit Bayesian belief over the persistence of each feature in the environment. By incorporating this feature persistence estimation into current state-of-the-art graphical mapping techniques, we obtain a flexible, computationally efficient, and information-theoretically rigorous framework for lifelong environmental modeling in an ever-changing world.", "title": "" }, { "docid": "f48d02ff3661d3b91c68d6fcf750f83e", "text": "There have been a number of techniques developed in recent years for the efficient analysis of probabilistic inference problems, represented as Bayes' networks or influence diagrams (Lauritzen and Spiegelhalter [9], Pearl [12], Shachter [14]). To varying degrees these methods exploit the conditional independence assumed and revealed in the problem structure to analyze problems in polynomial time, essentially polynomial in the number of variables and the size of the largest state space encountered during the evaluation. Unfortunately, there are many problems of interest for which the variables of interest are continuous rather than discrete, so the relevant state spaces become infinite and the polynomial complexity is of little help.", "title": "" }, { "docid": "a2d38448513e69f514f88eb852e76292", "text": "It is cost-efficient for a tenant with a limited budget to establish a virtual MapReduce cluster by renting multiple virtual private servers (VPSs) from a VPS provider. To provide an appropriate scheduling scheme for this type of computing environment, we propose in this paper a hybrid job-driven scheduling scheme (JoSS for short) from a tenant's perspective. JoSS provides not only job-level scheduling, but also map-task level scheduling and reduce-task level scheduling. 
JoSS classifies MapReduce jobs based on job scale and job type and designs an appropriate scheduling policy to schedule each class of jobs. The goal is to improve data locality for both map tasks and reduce tasks, avoid job starvation, and improve job execution performance. Two variations of JoSS are further introduced to separately achieve a better map-data locality and a faster task assignment. We conduct extensive experiments to evaluate and compare the two variations with current scheduling algorithms supported by Hadoop. The results show that the two variations outperform the other tested algorithms in terms of map-data locality, reduce-data locality, and network overhead without incurring significant overhead. In addition, the two variations are separately suitable for different MapReduce-workload scenarios and provide the best job performance among all tested algorithms.", "title": "" }, { "docid": "2ab51bd16640532e17f19f9df3880a1a", "text": "Using image analytics to monitor the contents and status of retail store shelves is an emerging trend with increasing business importance. Detecting and identifying multiple objects on store shelves involves a number of technical challenges. The particular nature of product package design, the arrangement of products on shelves, and the requirement to operate in unconstrained environments are just a few of the issues that must be addressed. We explain how we addressed these challenges in a system for monitoring planogram compliance, developed as part of a project with Tesco, a large multinational retailer. The new system offers store personnel an instant view of shelf status and a list of action items for restocking shelves. The core of the system is based on its ability to achieve high rates of product recognition, despite the very small visual differences between some products. This paper covers how state-of-the-art methods for object detection behave when applied to this problem. We also describe the innovative aspects of our implementation for size-scale-invariant product recognition and fine-grained classification.", "title": "" }, { "docid": "4d9adaac8dc69f902056d531f7570da7", "text": "A new CMOS buffer without short-circuit power consumption is proposed. The gatedriving signal of the output pull-up (pull-down) transistor is fed back to the output pull-down (pull-up) transistor to get tri-state output momentarily, eliminating the short-circuit power consumption. The HSPICE simulation results verified the operation of the proposed buffer and showed the power-delay product is about 15% smaller than conventional tapered CMOS buffer.", "title": "" }, { "docid": "9c8583dd46ef6ca49d7a9298377b755a", "text": "Traditional radio planning tools present a steep learning curve. We present BotRf, a Telegram Bot that facilitates the process by guiding non-experts in assessing the feasibility of radio links. Built on open source tools, BotRf can run on any smartphone or PC running Telegram. Using it on a smartphone has the added value that the Bot can leverage the internal GPS to enter coordinates. BotRf can be used in environments with low bandwidth as the generated data traffic is quite limited. 
We present examples of its use in Venezuela.", "title": "" }, { "docid": "5447d3fe8ed886a8792a3d8d504eaf44", "text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.", "title": "" }, { "docid": "7a4849a839b41e8c4c170f4b4b5a241b", "text": "A practical approach for generating motion paths with continuous steering for car-like mobile robots is presented here. This paper addresses two key issues in robot motion planning; path continuity and maximum curvature constraint for nonholonomic robots. The advantage of this new method is that it allows robots to account for their constraints in an efficient manner that facilitates real-time planning. Bspline curves are leveraged for their robustness and practical synthesis to model the vehicle’s path. Comparative navigational-based analyses are presented to selected appropriate curve and nominate its parameters. Path continuity is achieved by utilizing a single path, to represent the trajectory, with no limitations on path, or orientation. The path parameters are formulated with respect to the robot’s constraints. Maximum curvature is satisfied locally, in every segment using a smoothing algorithm, if needed. It is M. Elbanhawi ( ) · M. Simic · R. N. Jazar School of Aerospace, Mechanical, and Manufacturing Engineering (SAMME), RMIT University.Bundoora East Campus, Corner of Plenty Road, McKimmies Road, Bundoora VIC 3083, Melbourne, Australia e-mail: mohamed.elbenhawi@rmit.edu.au M. Simic e-mail: milan.simic@rmit.edu.au R. N. Jazar e-mail: reza.jazar@rmit.edu.au demonstrated that any local modifications of single sections have minimal effect on the entire path. Rigorous simulations are presented, to highlight the benefits of the proposed method, in comparison to existing approaches with regards to continuity, curvature control, path length and resulting acceleration. Experimental results validate that our approach mimics human steering with high accuracy. Accordingly, efficiently formulated continuous paths ultimately contribute towards passenger comfort improvement. Using presented approach, autonomous vehicles generate and follow paths that humans are accustomed to, with minimum disturbances.", "title": "" }, { "docid": "982d9a7e483bb254cbe3e6f90e7adcbd", "text": "Multiple transmitters can be used to simultaneously transmit power wirelessly to a single receiver via strongly coupled magnetic resonance. A simple circuit model is used to help explain the multiple-transmitter wireless power transfer system. Through this particular scheme, there is an increase in gain and “diversity” of the transmitted power according to the number of transmit coils. 
The effect of transmitter resonant coil coupling is also shown. Resonant frequency detuning due to nearby metallic objects is observed, and the extent of how much tuning can be done is demonstrated. A practical power line synchronization technique is proposed to synchronize all transmit coils, which reduces additional dedicated synchronization wiring or the addition of an RF front-end module to send the reference driving signal.", "title": "" } ]
scidocsrr
bb5a102285102a5701bfef87cd60312e
A Survey of Software Estimation Techniques and Project Planning Practices
[ { "docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb", "text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertise-based and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.", "title": "" } ]
[ { "docid": "e694d8429af455984a0ebde5ae10794a", "text": "Huntington’s disease (HD) is a heredodegenerative neurological disorder with chorea and other hyperkinetic movement disorders being part of the disease spectrum. These along with cognitive and neurobehavioral manifestations contribute significantly to patient’s disability. Several classes of drugs have been used to treat the various symptoms of HD. These include typical and atypical neuroleptics along with dopamine depletors for treatment of chorea and antidepressants, GABA agonists, antiepileptic medications, cholinesterase inhibitors, antiglutamatergic drugs and botulinum toxin for treatment of other manifestations. Tetrabenazine (TBZ), a dopamine depleting medication was recently approved by the US FDA for treatment of chorea in HD. The purpose of this article is to briefly review information regarding HD and current treatments for chorea and specifically focus on TBZ and review the literature related to its use in HD chorea.", "title": "" }, { "docid": "e4d1a0be0889aba00b80a2d6cdc2335b", "text": "This study uses a multi-period structural model developed by Chen and Yeh (2006), which extends the Geske-Johnson (1987) compound option model to evaluate the performance of capital structure arbitrage under a multi-period debt structure. Previous studies exploring capital structure arbitrage have typically employed single-period structural models, which have very limited empirical scopes. In this paper, we predict the default situations of a firm using the multi-period Geske-Johnson model that assumes endogenous default barriers. The Geske-Johnson model is the only model that accounts for the entire debt structure and imputes the default barrier to the asset value of the firm. This study also establishes trading strategies and analyzes the arbitrage performance of 369 North American obligators from 2004 to 2008. Comparing the performance of capital structure arbitrage between the Geske-Johnson and CreditGrades models, we find that the extended Geske-Johnson model is more suitable than the CreditGrades model for exploiting the mispricing between equity prices and credit default swap spreads.", "title": "" }, { "docid": "856f79a56ddbaea3e117bf5386bf57f4", "text": "The transition from marine to freshwater habitats is one of the major steps in the evolution of life. In the decapod crustaceans, four groups have colonized fresh water at different geological times since the Triassic, the freshwater shrimps, freshwater crayfish, freshwater crabs and freshwater anomurans. Some families have even colonized terrestrial habitats via the freshwater route or directly via the sea shore. Since none of these taxa has ever reinvaded its environment of origin the Decapoda appear particularly suitable to investigate life-history adaptations to fresh water. Evolutionary comparison of marine, freshwater and terrestrial decapods suggests that the reduction of egg number, abbreviation of larval development, extension of brood care and lecithotrophy of the first posthatching life stages are key adaptations to fresh water. Marine decapods usually have high numbers of small eggs and develop through a prolonged planktonic larval cycle, whereas the production of small numbers of large eggs, direct development and extended brood care until the juvenile stage is the rule in freshwater crayfish, primary freshwater crabs and aeglid anomurans. 
The amphidromous freshwater shrimp and freshwater crab species and all terrestrial decapods that invaded land via the sea shore have retained ocean-type planktonic development. Abbreviation of larval development and extension of brood care are interpreted as adaptations to the particularly strong variations of hydrodynamic parameters, physico-chemical factors and phytoplankton availability in freshwater habitats. These life-history changes increase fitness of the offspring and are obviously favoured by natural selection, explaining their multiple origins in fresh water. There is no evidence for their early evolution in the marine ancestors of the extant freshwater groups and a preadaptive role for the conquest of fresh water. The costs of the shift from relative r- to K-strategy in freshwater decapods are traded-off against fecundity, future reproduction and growth of females and perhaps against size of species but not against longevity of species. Direct development and extension of brood care is associated with the reduction of dispersal and gene flow among populations, which may explain the high degree of speciation and endemism in directly developing freshwater decapods. Direct development and extended brood care also favour the evolution of social systems, which in freshwater decapods range from simple subsocial organization to eusociality. Hermaphroditism and parthenogenesis, which have evolved in some terrestrial crayfish burrowers and invasive open water crayfish, respectively, may enable populations to adapt to restrictive or new environments by spatio-temporal alteration of their socio-ecological characteristics. Under conditions of rapid habitat loss, environmental pollution and global warming, the reduced dispersal ability of direct developers may turn into a severe disadvantage, posing a higher threat of extinction to freshwater crayfish, primary freshwater crabs, aeglids and landlocked freshwater shrimps as compared to amphidromous freshwater shrimps and secondary freshwater crabs.", "title": "" }, { "docid": "9c67b538a5e6806273b26d9c5899ef42", "text": "Back propagation training algorithms have been implemented by many researchers for their own purposes and provided publicly on the internet for others to use in verification of published results and for reuse in unrelated research projects. Often, the source code of a package is used as the basis for a new package for demonstrating new algorithm variations, or some functionality is added specifically for analysis of results. However, there are rarely any guarantees that the original implementation is faithful to the algorithm it represents, or that its code is bug free or accurate. This report attempts to look at a few implementations and provide a test suite which shows deficiencies in some software available which the average researcher may not be aware of, and may not have the time to discover on their own. This test suite may then be used to test the correctness of new packages.", "title": "" }, { "docid": "3817cbe08b92d780fb0c462ec5f359ce", "text": "Stability is an important yet under-addressed issue in feature selection from high-dimensional and small sample data. In this paper, we show that stability of feature selection has a strong dependency on sample size. We propose a novel framework for stable feature selection which first identifies consensus feature groups from subsampling of training samples, and then performs feature selection by treating each consensus feature group as a single entity. 
Experiments on both synthetic and real-world data sets show that an algorithm developed under this framework is effective at alleviating the problem of small sample size and leads to more stable feature selection results and comparable or better generalization performance than state-of-the-art feature selection algorithms. Synthetic data sets and algorithm source code are available at http://www.cs.binghamton.edu/~lyu/KDD09/.", "title": "" }, { "docid": "b60555d52e5a8772ba128b184ec6de73", "text": "Standardized 32-bit Cyclic Redundancy Codes provide fewer bits of guaranteed error detection than they could, achieving a Hamming Distance (HD) of only 4 for maximum-length Ethernet messages, whereas HD=6 is possible. Although research has revealed improved codes, exploring the entire design space has previously been computationally intractable, even for special-purpose hardware. Moreover, no CRC polynomial has yet been found that satisfies an emerging need to attain both HD=6 for 12K bit messages and HD=4 for message lengths beyond 64K bits. This paper presents results from the first exhaustive search of the 32-bit CRC design space. Results from previous research are validated and extended to include identifying all polynomials achieving a better HD than the IEEE 802.3 CRC-32 polynomial. A new class of polynomials is identified that provides HD=6 up to nearly 16K bit and HD=4 up to 114K bit message lengths, providing the best achievable design point that maximizes error detection for both legacy and new applications, including potentially iSCSI and application-implemented error checks.", "title": "" }, { "docid": "43a24625e781e8cb6824f61d59e9333d", "text": "In this work, we present a new software environment for the comparative evaluation of algorithms for grasping and dexterous manipulation. The key aspect in its development is to provide a tool that allows the reproduction of well-defined experiments in real-life scenarios in every laboratory and, hence, benchmarks that pave the way for objective comparison and competition in the field of grasping. In order to achieve this, experiments are performed on a sound open-source software platform with an extendable structure in order to be able to include a wider range of benchmarks defined by robotics researchers. The environment is integrated into the OpenGRASP toolkit that is built upon the OpenRAVE project and includes grasp-specific extensions and a tool for the creation/integration of new robot models. Currently, benchmarks for grasp and motion planning are included as case studies, as well as a library of domestic everyday objects models, and a real-life scenario that features a humanoid robot acting in a kitchen.", "title": "" }, { "docid": "131f119361582f0d538413680dfafd9d", "text": "In this paper, the problems of current web search engines are analyzed, and the need for a new design is justified. Some ideas on how to improve current web search engines are presented, and then an adaptive method for web meta-search engines based on multiple agents, especially mobile agents, is presented to make search engines work more efficiently. In the method, the cooperation between stationary and mobile agents is used to improve efficiency. The meta-search engine gives the user the needed documents based on a multi-stage mechanism. The merging of the results obtained from the search engines in the network is done in parallel. Using a reduction parallel algorithm, the efficiency of this method is increased. 
Furthermore, a feedback mechanism gives the meta-search engine the user’s suggestions about the found documents, which leads to a new query using a genetic algorithm. In the new search stage, more relevant documents are given to the user. The practical experiments were performed in Aglets programming environment. The results achieved from these experiments confirm the efficiency and adaptability of the method.", "title": "" }, { "docid": "c97e005d827b712e7d61d8a911c3bed6", "text": "Industries and individuals outsource database to realize convenient and low-cost applications and services. In order to provide sufficient functionality for SQL queries, many secure database schemes have been proposed. However, such schemes are vulnerable to privacy leakage to cloud server. The main reason is that database is hosted and processed in cloud server, which is beyond the control of data owners. For the numerical range query (“>,” “<,” and so on), those schemes cannot provide sufficient privacy protection against practical challenges, e.g., privacy leakage of statistical properties, access pattern. Furthermore, increased number of queries will inevitably leak more information to the cloud server. In this paper, we propose a two-cloud architecture for secure database, with a series of intersection protocols that provide privacy preservation to various numeric-related range queries. Security analysis shows that privacy of numerical information is strongly protected against cloud providers in our proposed scheme.", "title": "" }, { "docid": "2e4d1b5b1c1a8dbeba0d17025f2a2471", "text": "In this age of globalization, the need for competent legal translators is greater than ever. This perhaps explains the growing interest in legal translation not only by linguists but also by lawyers, the latter especially over the past 10 years (cf. Berteloot, 1999:101). Although Berteloot maintains that lawyers analyze the subject matter from a different perspective, she advises her colleagues also to take account of contributions by linguists (ibid.). I assume this includes translation theory as well. In the past, both linguists and lawyers have attempted to apply theories of general translation to legal texts, such as Catford’s concept of situation equivalence (Kielar, 1977:33), Nida’s theory of formal correspondence (Weisflog, 1987:187, 191); also in Weisflog 1996:35), and, more recently, Vermeer’s skopos theory (see Madsen’s, 1997:17-26). While some legal translators seem content to apply principles of general translation theory (Koutsivitis, 1988:37), others dispute the usefulness of translation theory for legal translation (Weston, 1991:1). The latter view is not surprising since special methods and techniques are required in legal translation, a fact confirmed by Bocquet, who recognizes the importance of establishing a theory or at least a theoretical framework that is practice oriented (1994). By analyzing legal translation as an act of communication in the mechanism of the law, my book New Approach to Legal Translation (1997) attempts to provide a theoretical basis for legal translation within the framework of modern translation theory.", "title": "" }, { "docid": "8f9e3bb85b4a2fcff3374fd700ac3261", "text": "Vehicle theft has become a pervasive problem in metropolitan cities. The aim of our work is to reduce the vehicle and fuel theft with an alert given by commonly used smart phones. 
The modern vehicles are interconnected with computer systems so that the information can be obtained from vehicular sources and Internet services. This provides space for tracking the vehicle through smart phones. In our work, an Advanced Encryption Standard (AES) algorithm is implemented which integrates a smart phone with classical embedded systems to avoid vehicle theft.", "title": "" }, { "docid": "3c1297b61456db30faefefc19bc079bd", "text": "The present paper examined the structure of Dutch adolescents’ music preferences, the stability of music preferences and the relations between Big-Five personality characteristics and (changes in) music preferences. Exploratory and confirmatory factor analyses of music-preference data from 2334 adolescents aged 12–19 revealed four clearly interpretable music-preference dimensions: Rock, Elite, Urban and Pop/Dance. One thousand and forty-four randomly selected adolescents from the original sample filled out questionnaires on music preferences and personality at three follow-up measurements. In addition to being relatively stable over 1, 2 and 3-year intervals, music preferences were found to be consistently related to personality characteristics, generally confirming prior research in the United States. Personality characteristics were also found to predict changes in music preferences over a 3-year interval. Copyright © 2007 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "f4d9190ad9123ddcf809f47c71225162", "text": "Selection of appropriate suppliers in supply chain management strategy (SCMS) is a challenging issue because it requires battery of evaluation criteria/attributes, which are characterized with complexity, elusiveness, and uncertainty in nature. This paper proposes a novel hierarchical evaluation framework to assist the expert group to select the optimal supplier in SCMS. The rationales for the evaluation framework are based upon (i) multi-criteria decision making (MCDM) analysis that can select the most appropriate alternative from a finite set of alternatives with reference to multiple conflicting criteria, (ii) analytic network process (ANP) technique that can simultaneously take into account the relationships of feedback and dependence of criteria, and (iii) choquet integral—a non-additive fuzzy integral that can eliminate the interactivity of expert subjective judgment problems. A case PCB manufacturing firm is studied and the results indicated that the proposed evaluation framework is simple and reasonable to identify the primary criteria influencing the SCMS, and it is effective to determine the optimal supplier even with the interactive and interdependent criteria/attributes. This hierarchical evaluation framework provides a complete picture in SCMS contexts to both researchers and practitioners. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "bd1523c64d8ec69d87cbe68a4d73ea17", "text": "BACKGROUND AND OBJECTIVE\nThe effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. 
The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library.\n\n\nMETHODS\nBased on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library.\n\n\nRESULTS\nWe have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest.\n\n\nCONCLUSIONS\nThe IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems.", "title": "" }, { "docid": "c5e29d6477aa183ad340448d5e3df193", "text": "The shift to cloud technologies is a paradigm change that offers considerable financial and administrative gains. However governmental and business institutions wanting to tap into these gains are concerned with security issues. The cloud presents new vulnerabilities and is dominated by new kinds of applications, which calls for new security solutions. Intuitively, Byzantine fault tolerant (BFT) replication has many benefits to enforce integrity and availability in clouds. Existing BFT systems, however, are not suited for typical “data-flow processing” cloud applications which analyze large amounts of data in a parallelizable manner: indeed, existing BFT solutions focus on replicating single monolithic servers, whilst data-flow applications consist in several different stages, each of which may give rise to multiple components at runtime to exploit cheap hardware parallelism; similarly, BFT replication hinges on comparison of redundant outputs generated, which in the case of data-flow processing can represent huge amounts of data. In fact, current limits of data processing directly depend on the amount of data that can be processed per time unit. In this paper we present ClusterBFT, a system that secures computations being run in the cloud by leveraging BFT replication coupled with fault isolation. In short, ClusterBFT leverages a combination of variable-degree clustering, approximated and offline output comparison, smart deployment, and separation of duty, to achieve a parameterized tradeoff between fault tolerance and overhead in practice. We demonstrate the low overhead achieved with ClusterBFT when securing dataflow computations expressed in Apache Pig, and Hadoop. Our solution allows assured computation with less than 10 percent latency overhead as shown by our evaluation.", "title": "" }, { "docid": "01a95065526771523795494c9968efb9", "text": "Depression is one of the most common and debilitating psychiatric disorders and is a leading cause of suicide. Most people who become depressed will have multiple episodes, and some depressions are chronic. Persons with bipolar disorder will also have manic or hypomanic episodes. 
Given the recurrent nature of the disorder, it is important not just to treat the acute episode, but also to protect against its return and the onset of subsequent episodes. Several types of interventions have been shown to be efficacious in treating depression. The antidepressant medications are relatively safe and work for many patients, but there is no evidence that they reduce risk of recurrence once their use is terminated. The different medication classes are roughly comparable in efficacy, although some are easier to tolerate than are others. About half of all patients will respond to a given medication, and many of those who do not will respond to some other agent or to a combination of medications. Electro-convulsive therapy is particularly effective for the most severe and resistant depressions, but raises concerns about possible deleterious effects on memory and cognition. It is rarely used until a number of different medications have been tried. Although it is still unclear whether traditional psychodynamic approaches are effective in treating depression, interpersonal psychotherapy (IPT) has fared well in controlled comparisons with medications and other types of psychotherapies. It also appears to have a delayed effect that improves the quality of social relationships and interpersonal skills. It has been shown to reduce acute distress and to prevent relapse and recurrence so long as it is continued or maintained. Treatment combining IPT with medication retains the quick results of pharmacotherapy and the greater interpersonal breadth of IPT, as well as boosting response in patients who are otherwise more difficult to treat. The main problem is that IPT has only recently entered clinical practice and is not widely available to those in need. Cognitive behavior therapy (CBT) also appears to be efficacious in treating depression, and recent studies suggest that it can work for even severe depressions in the hands of experienced therapists. Not only can CBT relieve acute distress, but it also appears to reduce risk for the return of symptoms as long as it is continued or maintained. Moreover, it appears to have an enduring effect that reduces risk for relapse or recurrence long after treatment is over. Combined treatment with medication and CBT appears to be as efficacious as treatment with medication alone and to retain the enduring effects of CBT. There also are indications that the same strategies used to reduce risk in psychiatric patients following successful treatment can be used to prevent the initial onset of depression in persons at risk. More purely behavioral interventions have been studied less than the cognitive therapies, but have performed well in recent trials and exhibit many of the benefits of cognitive therapy. Mood stabilizers like lithium or the anticonvulsants form the core treatment for bipolar disorder, but there is a growing recognition that the outcomes produced by modern pharmacology are not sufficient. Both IPT and CBT show promise as adjuncts to medication with such patients. The same is true for family-focused therapy, which is designed to reduce interpersonal conflict in the family. Clearly, more needs to be done with respect to treatment of the bipolar disorders. Good medical management of depression can be hard to find, and the empirically supported psychotherapies are still not widely practiced. As a consequence, many patients do not have access to adequate treatment. 
Moreover, not everyone responds to the existing interventions, and not enough is known about what to do for people who are not helped by treatment. Although great strides have been made over the past few decades, much remains to be done with respect to the treatment of depression and the bipolar disorders.", "title": "" }, { "docid": "83a4a89d3819009d61123a146b38d0e9", "text": "OBJECTIVE\nBehçet's disease (BD) is a chronic, relapsing, inflammatory vascular disease with no pathognomonic test. Low sensitivity of the currently applied International Study Group (ISG) clinical diagnostic criteria led to their reassessment.\n\n\nMETHODS\nAn International Team for the Revision of the International Criteria for BD (from 27 countries) submitted data from 2556 clinically diagnosed BD patients and 1163 controls with BD-mimicking diseases or presenting at least one major BD sign. These were randomly divided into training and validation sets. Logistic regression, 'leave-one-country-out' cross-validation and clinical judgement were employed to develop new International Criteria for BD (ICBD) with the training data. Existing and new criteria were tested for their performance in the validation set.\n\n\nRESULTS\nFor the ICBD, ocular lesions, oral aphthosis and genital aphthosis are each assigned 2 points, while skin lesions, central nervous system involvement and vascular manifestations 1 point each. The pathergy test, when used, was assigned 1 point. A patient scoring ≥4 points is classified as having BD. In the training set, 93.9% sensitivity and 92.1% specificity were assessed compared with 81.2% sensitivity and 95.9% specificity for the ISG criteria. In the validation set, ICBD demonstrated an unbiased estimate of sensitivity of 94.8% (95% CI: 93.4-95.9%), considerably higher than that of the ISG criteria (85.0%). Specificity (90.5%, 95% CI: 87.9-92.8%) was lower than that of the ISG criteria (96.0%), yet still reasonably high. For countries with at least 90% of cases and controls having a pathergy test, adding 1 point for the pathergy test increased the estimate of sensitivity from 95.5% to 98.5%, while barely reducing specificity from 92.1% to 91.6%.\n\n\nCONCLUSION\nThe new proposed criteria derived from multinational data exhibit much improved sensitivity over the ISG criteria while maintaining reasonable specificity. It is proposed that the ICBD criteria be adopted both as a guide for diagnosis and classification of BD.", "title": "" }, { "docid": "16f5686c1675d0cf2025cf812247ab45", "text": "This paper presents the system analysis and implementation of a soft switching Sepic-Cuk converter to achieve zero voltage switching (ZVS). In the proposed converter, the Sepic and Cuk topologies are combined together in the output side. The features of the proposed converter are to reduce the circuit components (share the power components in the transformer primary side) and to share the load current. Active snubber is connected in parallel with the primary side of transformer to release the energy stored in the leakage inductor of transformer and to limit the peak voltage stress of switching devices when the main switch is turned off. The active snubber can achieve ZVS turn-on for power switches. Experimental results, taken from a laboratory prototype rated at 300W, are presented to verify the effectiveness of the proposed converter.
", "title": "" }, { "docid": "e7f7f75ffb56c96167610361b7937ad8", "text": "Emergence of crypto-ransomware has significantly changed the cyber threat landscape. A crypto ransomware removes data custodian access by encrypting valuable data on victims’ computers and requests a ransom payment to reinstantiate custodian access by decrypting data. Timely detection of ransomware very much depends on how quickly and accurately system logs can be mined to hunt abnormalities and stop the evil. In this paper we first setup an environment to collect activity logs of 517 Locky ransomware samples, 535 Cerber ransomware samples and 572 samples of TeslaCrypt ransomware. We utilize Sequential Pattern Mining to find Maximal Frequent Patterns (MFP) of activities within different ransomware families as candidate features for classification using J48, Random Forest, Bagging and MLP algorithms. We could achieve 99% accuracy in detecting ransomware instances from goodware samples and 96.5% accuracy in detecting family of a given ransomware sample. Our results indicate usefulness and practicality of applying pattern mining techniques in detection of good features for ransomware hunting. Moreover, we showed existence of distinctive frequent patterns within different ransomware families which can be used for identification of a ransomware sample family for building intelligence about threat actors and threat profile of a given target.", "title": "" }, { "docid": "a5ff7c80c36f354889e3f48e94052195", "text": "A meta-analysis examined emotion recognition within and across cultures. Emotions were universally recognized at better-than-chance levels. Accuracy was higher when emotions were both expressed and recognized by members of the same national, ethnic, or regional group, suggesting an in-group advantage. This advantage was smaller for cultural groups with greater exposure to one another, measured in terms of living in the same nation, physical proximity, and telephone communication. Majority group members were poorer at judging minority group members than the reverse. Cross-cultural accuracy was lower in studies that used a balanced research design, and higher in studies that used imitation rather than posed or spontaneous emotional expressions. Attributes of study design appeared not to moderate the size of the in-group advantage.", "title": "" } ]
scidocsrr
6a507747f732660c0cfe0d0ab36bcefe
Privacy-preserving verifiable data aggregation and analysis for cloud-assisted mobile crowdsourcing
[ { "docid": "97fee760308f95398b6717a091a977d2", "text": "We introduce and formalize the notion of Verifiable Computation , which enables a computationally weak client to “outsource” the computation of a functio n F on various dynamically-chosen inputs x1, ...,xk to one or more workers. The workers return the result of the fu nction evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi . The primary constraint is that the verification of the proof should requi re substantially less computational effort than computingF(xi) from scratch. We present a protocol that allows the worker to return a compu tationally-sound, non-interactive proof that can be verified inO(m· poly(λ)) time, wherem is the bit-length of the output of F , andλ is a security parameter. The protocol requires a one-time pr e-processing stage by the client which takes O(|C| · poly(λ)) time, whereC is the smallest known Boolean circuit computing F . Unlike previous work in this area, our scheme also provides (at no additional cost) input and output privacy for the client, meaning that the workers do not learn any information about t hexi or yi values.", "title": "" }, { "docid": "4da68af0db0b1e16f3597c8820b2390d", "text": "We study the task of verifiable delegation of computation on encrypted data. We improve previous definitions in order to tolerate adversaries that learn whether or not clients accept the result of a delegated computation. In this strong model, we construct a scheme for arbitrary computations and highly efficient schemes for delegation of various classes of functions, such as linear combinations, high-degree univariate polynomials, and multivariate quadratic polynomials. Notably, the latter class includes many useful statistics. Using our solution, a client can store a large encrypted dataset on a server, query statistics over this data, and receive encrypted results that can be efficiently verified and decrypted.\n As a key contribution for the efficiency of our schemes, we develop a novel homomorphic hashing technique that allows us to efficiently authenticate computations, at the same cost as if the data were in the clear, avoiding a $10^4$ overhead which would occur with a naive approach. We support our theoretical constructions with extensive implementation tests that show the practical feasibility of our schemes.", "title": "" } ]
[ { "docid": "fc3beed303b26fb7f58327b34153751d", "text": "Ingestion of ethylene glycol may be an important contributor in patients with metabolic acidosis of unknown cause and subsequent renal failure. Expeditious diagnosis and treatment will limit metabolic toxicity and decrease morbidity and mortality. Ethylene glycol poisoning should be suspected in an intoxicated patient with anion gap acidosis, hypocalcemia, urinary crystals, and nontoxic blood alcohol concentration. Fomepizole is a newer agent with a specific indication for the treatment of ethylene glycol poisoning. Metabolic acidosis is resolved within three hours of initiating therapy. Initiation of fomepizole therapy before the serum creatinine concentration rises can minimize renal impairment. Compared with traditional ethanol treatment, advantages of fomepizole include lack of depression of the central nervous system and hypoglycemia, and easier maintenance of effective plasma levels.", "title": "" }, { "docid": "b7b01049a4cc9cfd2dd951ee1302bfbc", "text": "This article describes the design, implementation, and results of the latest installment of the dermoscopic image analysis benchmark challenge. The goal is to support research and development of algorithms for automated diagnosis of melanoma, the most lethal skin cancer. The challenge was divided into 3 tasks: lesion segmentation, feature detection, and disease classification. Participation involved 593 registrations, 81 pre-submissions, 46 finalized submissions (including a 4-page manuscript), and approximately 50 attendees, making this the largest standardized and comparative study in this field to date. While the official challenge duration and ranking of participants has concluded, the dataset snapshots remain available for further research and development.", "title": "" }, { "docid": "a34efaa2a8739cce020cb5fe1da6883d", "text": "Graphical models, as applied to multi-target prediction problems, commonly utilize interaction terms to impose structure among the output variables. Often, such structure is based on the assumption that related outputs need to be similar and interaction terms that force them to be closer are adopted. Here we relax that assumption and propose a feature that is based on distance and can adapt to ensure that variables have smaller or larger difference in values. We utilized a Gaussian Conditional Random Field model, where we have extended its originally proposed interaction potential to include a distance term. The extended model is compared to the baseline in various structured regression setups. An increase in predictive accuracy was observed on both synthetic examples and real-world applications, including challenging tasks from climate and healthcare domains.", "title": "" }, { "docid": "b8095fb49846c89a74cc8c0f69891877", "text": "Attitudes held with strong moral conviction (moral mandates) were predicted to have different interpersonal consequences than strong but nonmoral attitudes. After controlling for indices of attitude strength, the authors explored the unique effect of moral conviction on the degree that people preferred greater social (Studies 1 and 2) and physical (Study 3) distance from attitudinally dissimilar others and the effects of moral conviction on group interaction and decision making in attitudinally homogeneous versus heterogeneous groups (Study 4). 
Results supported the moral mandate hypothesis: Stronger moral conviction led to (a) greater preferred social and physical distance from attitudinally dissimilar others, (b) intolerance of attitudinally dissimilar others in both intimate (e.g., friend) and distant relationships (e.g., owner of a store one frequents), (c) lower levels of good will and cooperativeness in attitudinally heterogeneous groups, and (d) a greater inability to generate procedural solutions to resolve disagreements.", "title": "" }, { "docid": "0241cef84d46b942ee32fc7345874b90", "text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.", "title": "" }, { "docid": "2a6b46006cc7cc0fd11d29fd4f77ea88", "text": "Quantitative market research and qualitative user-centered design research have long had an uneasy and complex relationship. A trend toward increasingly complex statistical segmentations and associated personas will once again increase the urgency of addressing paradigm differences to allow the two disciplines to collaborate effectively.\n We present an instructive case in which qualitative field research helped contribute to abandoning a \"state of the art\" quantitative user segmentation that was used in an attempt to unify both marketing and user experience planning around a shared model of users. This case exposes risks in quantitative segmentation research, common fallacies in the evolving practice of segmentation and use of personas, and the dangers of excessive deference to quantitative research generally.", "title": "" }, { "docid": "526556eefeebfc61a84d17ba865f3f9d", "text": "Browser fingerprinting is a relatively new method of uniquely identifying browsers that can be used to track web users. In some ways it is more privacy-threatening than tracking via cookies, as users have no direct control over it. A number of authors have considered the wide variety of techniques that can be used to fingerprint browsers; however, relatively little information is available on how widespread browser fingerprinting is, and what information is collected to create these fingerprints in the real world. To help address this gap, we crawled the 10,000 most popular websites; this gave insights into the number of websites that are using the technique, which websites are collecting fingerprinting information, and exactly what information is being retrieved. We found that approximately 69% of websites are, potentially, involved in first-party or third-party browser fingerprinting. We further found that third-party browser fingerprinting, which is potentially more privacy-damaging, appears to be predominant in practice. We also describe FingerprintAlert, a freely available browser add-on we developed that detects and, optionally, blocks fingerprinting attempts by visited websites.", "title": "" }, { "docid": "01b463ffb4a95e6cb9885674f594425c", "text": "Compact hardware architectures are proposed for the ISO/IEC 10118-3 standard hash function Whirlpool. In order to reduce the circuit area, the 512-bit function block ρ[k] for the main datapath is divided into smaller sub-blocks with 256-, 128-, or 64-bit buses, and the sub-blocks are used iteratively. Six architectures are designed by combining the three different datapath widths and two data scheduling techniques: interleave and pipeline. 
The six architectures in conjunction with three different types of S-box were synthesized using a 90-nm CMOS standard cell library, with two optimization options: size and speed. A total of 18 implementations were obtained, and their performances were compared with conventional designs using the same standard cell library. The highest hardware efficiency (defined by throughput per gate) of 372.3 Kbps/gate was achieved by the proposed pipeline architecture with the 256-bit datapath optimized for speed. The interleaved architecture with the 64-bit datapath optimized for size showed the smallest size of 13.6 Kgates, which requires only 46% of the resources of the conventional compact architecture.", "title": "" }, { "docid": "38d3dc6b5eb1dbf85b1a371b645a17da", "text": "Energy costs for data centers continue to rise, already exceeding $15 billion yearly. Sadly much of this power is wasted. Servers are only busy 10--30% of the time on average, but they are often left on, while idle, utilizing 60% or more of peak power when in the idle state.\n We introduce a dynamic capacity management policy, AutoScale, that greatly reduces the number of servers needed in data centers driven by unpredictable, time-varying load, while meeting response time SLAs. AutoScale scales the data center capacity, adding or removing servers as needed. AutoScale has two key features: (i) it autonomically maintains just the right amount of spare capacity to handle bursts in the request rate; and (ii) it is robust not just to changes in the request rate of real-world traces, but also request size and server efficiency.\n We evaluate our dynamic capacity management approach via implementation on a 38-server multi-tier data center, serving a web site of the type seen in Facebook or Amazon, with a key-value store workload. We demonstrate that AutoScale vastly improves upon existing dynamic capacity management policies with respect to meeting SLAs and robustness.", "title": "" }, { "docid": "a00f344024cc1df9049485a5c548551a", "text": "This paper describes the first achievement of over 20,000 quality factors among on-chip relaxation oscillators. The proposed Power Averaging Feedback with a Chopped Amplifier enables such a high Q which is close to MEMS oscillators. 1/f noise free design and rail-to-rail oscillation result in low phase noise with small area and low power consumption. The proposed oscillator can be applied to low noise applications (e.g. digital audio players) implemented onto a System on a Chip.", "title": "" }, { "docid": "b1d1fee5a2caab09e220a5914d402118", "text": "Internet is an important tool for information searching and purchasing of products especially in tourism. This paper investigates the impact of demographic and travel characteristics of hotel guests on online and offline reservations. It also examines the importance of hotel attributes in selecting a hotel and the differences in priorities between guests who book their accommodation online and offline. Data was collected by surveys of guests in three hotels in Dubrovnik. The results of the study showed that the method of booking mostly depends on the nature of travel, and that the importance of hotel attributes does not differ much between different groups of guests.", "title": "" }, { "docid": "43a24d75aa3138ef7e1e473c58de1e2e", "text": "The application of active flow control on a vertical tail of a typical twin engine aircraft was investigated. 
Sweeping jets installed into the rudder surface were used and their effect was assessed by force measurements, flow visualization and local pressure distributions. The airfoil forming the tail is a NACA 0012 with a rudder using 35% of its chord. The tests were carried out at the Lucas Wind Tunnel at the California Institute of Technology at representative Reynolds numbers of up to Re=1.5 million. Multiple flap deflections and spanwise actuator configurations were tested resulting in an increase of up to 50-70% in side force depending on the free stream velocity and momentum input.", "title": "" }, { "docid": "078578f356cb7946e3956c571bef06ee", "text": "Background: Dysphagia is common and costly. The ability of patient symptoms to predict objective swallowing dysfunction is uncertain. Purpose: This study aimed to evaluate the ability of the Eating Assessment Tool (EAT-10) to screen for aspiration risk in patients with dysphagia. Methods: Data from individuals with dysphagia undergoing a videofluoroscopic swallow study between January 2012 and July 2013 were abstracted from a clinical database. Data included the EAT-10, Penetration Aspiration Scale (PAS), total pharyngeal transit (TPT) time, and underlying diagnoses. Bivariate linear correlation analysis, sensitivity, specificity, and predictive values were calculated. Results: The mean age of the entire cohort (N = 360) was 64.40 (± 14.75) years. Forty-six percent were female. The mean EAT-10 was 16.08 (± 10.25) for nonaspirators and 23.16 (± 10.88) for aspirators (P < .0001). There was a linear correlation between the total EAT-10 score and the PAS (r = 0.273, P < .001). Sensitivity and specificity of an EAT-10 > 15 in predicting aspiration were 71% and 53%, respectively. Conclusion: Subjective dysphagia symptoms as documented with the EAT-10 can predict aspiration risk. A linear correlation exists between the EAT-10 and aspiration events (PAS) and aspiration risk (TPT time). Persons with an EAT-10 > 15 are 2.2 times more likely to aspirate (95% confidence interval, 1.3907-3.6245). The sensitivity of an EAT-10 > 15 is 71%.", "title": "" }, { "docid": "eccfdebca0bc5b0ea4ce62f718591910", "text": "In this paper we forecast hotspots of street crime in Portland, Oregon. Our approach uses geosocial media posts, which define the predictors in geographically weighted regression (GWR) models. We use two predictors that are both derived from Twitter data. The first one is the population at risk of being victim of street crime. The second one is the crime related tweets. These two predictors were used in GWR to create models that depict future street crime hotspots. The predicted hotspots enclosed more than 23% of the future street crimes in 1% of the study area and also outperformed the prediction efficiency of a baseline approach. Future work will focus on optimizing the prediction parameters and testing the applicability of this approach to other mobile crime types.", "title": "" }, { "docid": "52f95d1c0e198c64455269fd09108703", "text": "Dynamic control theory has long been used in solving optimal asset allocation problems, and a number of trading decision systems based on reinforcement learning methods have been applied in asset allocation and portfolio rebalancing. 
In this paper, we extend the existing work in recurrent reinforcement learning (RRL) and build an optimal variable weight portfolio allocation under a coherent downside risk measure, the expected maximum drawdown, E(MDD). In particular, we propose a recurrent reinforcement learning method, with a coherent risk adjusted performance objective function, the Calmar ratio, to obtain both buy and sell signals and asset allocation weights. Using a portfolio consisting of the most frequently traded exchange-traded funds, we show that the expected maximum drawdown risk based objective function yields superior return performance compared to previously proposed RRL objective functions (i.e. the Sharpe ratio and the Sterling ratio), and that variable weight RRL long/short portfolios outperform equal weight RRL long/short portfolios under different transaction cost scenarios. We further propose an adaptive E(MDD) risk based RRL portfolio rebalancing decision system with a transaction cost and market condition stop-loss retraining mechanism.", "title": "" }, { "docid": "2da6c199c7561855fde9be6f4798a4af", "text": "Ontogenetic development of the digestive system in golden pompano (Trachinotus ovatus, Linnaeus 1758) larvae was histologically and enzymatically studied from hatch to 32 day post-hatch (DPH). The development of digestive system in golden pompano can be divided into three phases: phase I starting from hatching and ending at the onset of exogenous feeding; phase II starting from first feeding (3 DPH) and finishing at the formation of gastric glands; and phase III starting from the appearance of gastric glands on 15 DPH and continuing onward. The specific activities of trypsin, amylase, and lipase increased sharply from the onset of first feeding to 5–7 DPH, followed by irregular fluctuations. Toward the end of this study, the specific activities of trypsin and amylase showed a declining trend, while the lipase activity remained at similar levels as it was at 5 DPH. The specific activity of pepsin was first detected on 15 DPH and increased with fish age. The dynamics of digestive enzymes corresponded to the structural development of the digestive system. The enzyme activities tend to be stable after the formation of the gastric glands in fish stomach on 15 DPH. The composition of digestive enzymes in larval pompano indicates that fish are able to digest protein, lipid and carbohydrate at early developmental stages. Weaning of larval pompano is recommended from 15 DPH onward. Results of the present study lead to a better understanding of the ontogeny of golden pompano during the larval stage and provide a guide to feeding and weaning of this economically important fish in hatcheries.", "title": "" }, { "docid": "fc0b9bd0fa975e71800dd1610f2a4bb3", "text": "Data Mining with Bayesian Network learning has two important characteristics: under conditions learned edges between variables correspond to causal influences, and second, for every variable T in the network a special subset (Markov Blanket) identifiable by the network is the minimal variable set required to predict T. 
However, all known algorithms learning a complete BN do not scale up beyond a few hundred variables. On the other hand, all known sound algorithms learning a local region of the network require an exponential number of training instances to the size of the learned region.The contribution of this paper is two-fold. We introduce a novel local algorithm that returns all variables with direct edges to and from a target variable T as well as a local algorithm that returns the Markov Blanket of T. Both algorithms (i) are sound, (ii) can be run efficiently in datasets with thousands of variables, and (iii) significantly outperform in terms of approximating the true neighborhood previous state-of-the-art algorithms using only a fraction of the training size required by the existing methods. A fundamental difference between our approach and existing ones is that the required sample depends on the generating graph connectivity and not the size of the local region; this yields up to exponential savings in sample relative to previously known algorithms. The results presented here are promising not only for discovery of local causal structure, and variable selection for classification, but also for the induction of complete BNs.", "title": "" }, { "docid": "31bf58e44a2c6747a79fc4bb549e1465", "text": "Today's WiFi access points (APs) are ubiquitous, and provide critical connectivity for a wide range of mobile networking devices. Many management tasks, e.g. optimizing AP placement and detecting rogue APs, require a user to efficiently determine the location of wireless APs. Unlike prior localization techniques that require either specialized equipment or extensive outdoor measurements, we propose a way to locate APs in real-time using commodity smartphones. Our insight is that by rotating a wireless receiver (smartphone) around a signal-blocking obstacle (the user's body), we can effectively emulate the sensitivity and functionality of a directional antenna. Our measurements show that we can detect these signal strength artifacts on multiple smartphone platforms for a variety of outdoor environments. We develop a model for detecting signal dips caused by blocking obstacles, and use it to produce a directional analysis technique that accurately predicts the direction of the AP, along with an associated confidence value. The result is Borealis, a system that provides accurate directional guidance and leads users to a desired AP after a few measurements. Detailed measurements show that Borealis is significantly more accurate than other real-time localization systems, and is nearly as accurate as offline approaches using extensive wireless measurements.", "title": "" }, { "docid": "35625af7c8f2b12c8425c2398e025ef8", "text": "Child stunting in India exceeds that in poorer regions like sub-Saharan Africa. Data on over 168,000 children show that, relative to Africa, India's height disadvantage increases sharply with birth order. We posit that India’s steep birth order gradient is due to favoritism toward eldest sons, which affects parents' fertility decisions and resource allocation across children. We show that, within India, the gradient is steeper for high-son-preference regions and religions. The gradient also varies with sibling gender as predicted. 
A back-of-the-envelope calculation suggests that India's steeper birth order gradient can explain over one-half of the India-Africa gap in average child height.", "title": "" }, { "docid": "01cc1b289f68fa396655b9e374b6aaa9", "text": "The biological mechanisms underlying long-term partner bonds in humans are unclear. The evolutionarily conserved neuropeptide oxytocin (OXT) is associated with the formation of partner bonds in some species via interactions with brain dopamine reward systems. However, whether it plays a similar role in humans has as yet not been established. Here, we report the results of a discovery and a replication study, each involving a double-blind, placebo-controlled, within-subject, pharmaco-functional MRI experiment with 20 heterosexual pair-bonded male volunteers. In both experiments, intranasal OXT treatment (24 IU) made subjects perceive their female partner's face as more attractive compared with unfamiliar women but had no effect on the attractiveness of other familiar women. This enhanced positive partner bias was paralleled by an increased response to partner stimuli compared with unfamiliar women in brain reward regions including the ventral tegmental area and the nucleus accumbens (NAcc). In the left NAcc, OXT even augmented the neural response to the partner compared with a familiar woman, indicating that this finding is partner-bond specific rather than due to familiarity. Taken together, our results suggest that OXT could contribute to romantic bonds in men by enhancing their partner's attractiveness and reward value compared with other women.", "title": "" } ]
scidocsrr
5f318eda27d19bc0b6d5dca948543156
HANDEXOS: Towards an exoskeleton device for the rehabilitation of the hand
[ { "docid": "8d7ece4b518223bc8156b173875d06e3", "text": "This paper presents two robot devices for use in the rehabilitation of upper limb movements and reports the quantitative parameters obtained to characterize the rate of improvement, thus allowing a precise monitoring of patient's recovery. A one degree of freedom (DoF) wrist manipulator and a two-DoF elbow-shoulder manipulator were designed using an admittance control strategy; if the patient could not move the handle, the devices completed the motor task. Two groups of chronic post-stroke patients (G1 n=7, and G2 n=9) were enrolled in a three week rehabilitation program including standard physical therapy (45 min daily) plus treatment by means of robot devices, respectively, for wrist and elbow-shoulder movements (40 min, twice daily). Both groups were evaluated by means of standard clinical assessment scales and a new robot measured evaluation metrics that included an active movement index quantifying the patient's ability to execute the assigned motor task without robot assistance, the mean velocity, and a movement accuracy index measuring the distance of the executed path from the theoretic one. After treatment, both groups improved their motor deficit and disability. In G1, there was a significant change in the clinical scale values (p<0.05) and range of motion wrist extension (p<0.02). G2 showed a significant change in clinical scales (p<0.01), in strength (p<0.05) and in the robot measured parameters (p<0.01). The relationship between robot measured parameters and the clinical assessment scales showed a moderate and significant correlation (r>0.53 p<0.03). Our findings suggest that robot-aided neurorehabilitation may improve the motor outcome and disability of chronic post-stroke patients. The new robot measured parameters may provide useful information about the course of treatment and its effectiveness at discharge.", "title": "" } ]
[ { "docid": "c13bf429abfb718e6c3557ae71f45f8f", "text": "Researchers who study punishment and social control, like those who study other social phenomena, typically seek to generalize their findings from the data they have to some larger context: in statistical jargon, they generalize from a sample to a population. Generalizations are one important product of empirical inquiry. Of course, the process by which the data are selected introduces uncertainty. Indeed, any given dataset is but one of many that could have been studied. If the dataset had been different, the statistical summaries would have been different, and so would the conclusions, at least by a little. How do we calibrate the uncertainty introduced by data collection? Nowadays, this question has become quite salient, and it is routinely answered using wellknown methods of statistical inference, with standard errors, t-tests, and P-values, culminating in the “tabular asterisks” of Meehl (1978). These conventional answers, however, turn out to depend critically on certain rather restrictive assumptions, for instance, random sampling.1 When the data are generated by random sampling from a clearly defined population, and when the goal is to estimate population parameters from sample statistics, statistical inference can be relatively straightforward. The usual textbook formulas apply; tests of statistical significance and confidence intervals follow. If the random-sampling assumptions do not apply, or the parameters are not clearly defined, or the inferences are to a population that is only vaguely defined, the calibration of uncertainty offered by contemporary statistical technique is in turn rather questionable.2 Thus, investigators who use conventional statistical technique", "title": "" }, { "docid": "dc98ddb6033ca1066f9b0ba5347a3d0c", "text": "Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. 
This work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.", "title": "" }, { "docid": "642078190a7df09c19d012b492152540", "text": "Research has examined the benefits and costs of employing adults with autism spectrum disorder (ASD) from the perspective of the employee, taxpayer and society, but few studies have considered the employer perspective. This study examines the benefits and costs of employing adults with ASD, from the perspective of employers. Fifty-nine employers employing adults with ASD in open employment were asked to complete an online survey comparing employees with and without ASD on the basis of job similarity. The findings suggest that employing an adult with ASD provides benefits to employers and their organisations without incurring additional costs.", "title": "" }, { "docid": "59ba83e88085445e3bcf009037af6617", "text": "— We examine the relationship between resource abundance and several indicators of human welfare. Consistent with the existing literature on the relationship between resource abundance and economic growth we find that, given an initial income level, resource-intensive countries tend to suffer lower levels of human development. While we find only weak support for a direct link between resources and welfare, there is an indirect link that operates through institutional quality. There are also significant differences in the effects that resources have on different measures of institutional quality. These results imply that the ‘‘resource curse’’ is a more encompassing phenomenon than previously considered, and that key differences exist between the effects of different resource types on various aspects of governance and human welfare. 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ddbf68174da624f4d2f19fc25cafc870", "text": "Large scale streaming systems aim to provide high throughput and low latency. They are often used to run mission-critical applications, and must be available 24x7. Thus such systems need to adapt to failures and inherent changes in workloads, with minimal impact on latency and throughput. Unfortunately, existing solutions require operators to choose between achieving low latency during normal operation and incurring minimal impact during adaptation. Continuous operator streaming systems, such as Naiad and Flink, provide low latency during normal execution but incur high overheads during adaptation (e.g., recovery), while micro-batch systems, such as Spark Streaming and FlumeJava, adapt rapidly at the cost of high latency during normal operations.\n Our key observation is that while streaming workloads require millisecond-level processing, workload and cluster properties change less frequently. Based on this, we develop Drizzle, a system that decouples the processing interval from the coordination interval used for fault tolerance and adaptability. Our experiments on a 128 node EC2 cluster show that on the Yahoo Streaming Benchmark, Drizzle can achieve end-to-end record processing latencies of less than 100ms and can get 2-3x lower latency than Spark. 
Drizzle also exhibits better adaptability, and can recover from failures 4x faster than Flink while having up to 13x lower latency during recovery.", "title": "" }, { "docid": "6236c8617cd2fff285896f54a054ca5f", "text": "This paper introduces an innovative contactless energy transfer system for power supplying rotatable parts of automatic machineries as a robust alternative to sliding contacts. The design procedure of the key system devices, such as the dc/ac converter and the rotary transformer with its windings, is discussed. Different configurations in terms of switching network, number of turns, wire, and compensation are discussed with respect to their impact on the system efficiency and power factor. A thermal model to account for the core temperature is also introduced. Two setups, operating at 50 kHz and able to transfer up to 1.3 kW in a contactless fashion, are realized and tested to supply sealing roller resistors of automatic machineries. The system prototypes are then experimentally tested to validate the analytical models and the finite element simulations of the entire wireless power transfer link. Trade-offs emerge from the setups, in particular concerning efficiency, power factor, and electro-magnetic compatibility. As a consequence, we discuss the choice of the optimal configuration depending on the final application.", "title": "" }, { "docid": "b9921c1ec7fc2b6c88748ba7f9346524", "text": "As the interest in the representation of context dependent knowledge in the Semantic Web has been recognized, a number of logic based solutions have been proposed in this regard. In our recent works, in response to this need, we presented the description logic-based Contextualized Knowledge Repository (CKR) framework. CKR is not only a theoretical framework, but it has been effectively implemented over state-of-the-art tools for the management of Semantic Web data: inference inside and across contexts has been realized in the form of forward SPARQL-based rules over different RDF named graphs. In this paper we present the first evaluation results for such CKR implementation. In particular, in first experiment we study its scalability with respect to different reasoning regimes. In a second experiment we analyze the effects of knowledge propagation on the computation of inferences.", "title": "" }, { "docid": "de970d5359f2bf5ed510852e8d68d57d", "text": "The effect of dietary Bacillus-based direct-fed microbials (DFMs; eight single strains designated as Bs2084, LSSAO1, 3AP4, Bs18, 15AP4, 22CP1, Bs27, and Bs278, and one multiple-strain DFM product [AVICORR]) on growth performance, intestinal lesions, and innate and acquired immunities were evaluated in broiler chickens following Eimeria maxima (EM) infection. EM-induced reduction of body weight gain and intestinal lesions were significantly decreased by addition of 15AP4 or Bs27 into broiler diets compared with EM-infected control birds. Serum nitric oxide levels were increased in infected chickens fed with Bs27, but lowered in those given Bs2084, LSSAO1, 3AP4 or 15AP4 compared with the infected controls. Recombinant coccidial antigen (3-1E)-stimulated spleen cell proliferation was increased in chickens given Bs27, 15AP4, LSSAO1, 3AP4, or Bs18, compared with the infected controls. Finally, all experimental diets increased concanavalin A-induced splenocyte mitogenesis in infected broilers compared with the nonsupplemented and infected controls. 
In summary, dietary Bacillus subtilis-based DFMs reduced the clinical signs of experimental avian coccidiosis and increased various parameters of immunity in broiler chickens in a strain-dependent manner.", "title": "" }, { "docid": "1abc8cbd17d1de7ee50430eb65b62fec", "text": "Digital immersion is moving into public space. Interactive screens and public displays are deployed in urban environments, malls, and shop windows. Inner city areas, airports, train stations and stadiums are experiencing a transformation from traditional to digital displays enabling new forms of multimedia presentation and new user experiences. Imagine a walkway with digital displays that allows a user to immerse herself in her favorite content while moving through public space. In this paper we discuss the fundamentals for creating exciting public displays and multimedia experiences enabling new forms of engagement with digital content. Interaction in public space and with public displays can be categorized in phases, each having specific requirements. Attracting, engaging and motivating the user are central design issues that are addressed in this paper. We provide a comprehensive analysis of the design space explaining mental models and interaction modalities and we conclude a taxonomy for interactive public display from this analysis. Our analysis and the taxonomy are grounded in a large number of research projects, art installations and experience. With our contribution we aim at providing a comprehensive guide for designers and developers of interactive multimedia on public displays.", "title": "" }, { "docid": "d509601659e2192fb4ea8f112c9d75fe", "text": "Computer vision has advanced significantly that many discriminative approaches such as object recognition are now widely used in real applications. We present another exciting development that utilizes generative models for the mass customization of medical products such as dental crowns. In the dental industry, it takes a technician years of training to design synthetic crowns that restore the function and integrity of missing teeth. Each crown must be customized to individual patients, and it requires human expertise in a time-consuming and laborintensive process, even with computer assisted design software. We develop a fully automatic approach that learns not only from human designs of dental crowns, but also from natural spatial profiles between opposing teeth. The latter is hard to account for by technicians but important for proper biting and chewing functions. Built upon a Generative Adversarial Network architecture (GAN), our deep learning model predicts the customized crown-filled depth scan from the crown-missing depth scan and opposing depth scan. We propose to incorporate additional space constraints and statistical compatibility into learning. Our automatic designs exceed human technicians’ standards for good morphology and functionality, and our algorithm is being tested for production use.", "title": "" }, { "docid": "06b99205e1dc53e5120a22dc4f927aa0", "text": "The last 2 decades witnessed a surge in empirical studies on the variables associated with achievement in higher education. A number of meta-analyses synthesized these findings. In our systematic literature review, we included 38 meta-analyses investigating 105 correlates of achievement, based on 3,330 effect sizes from almost 2 million students. We provide a list of the 105 variables, ordered by the effect size, and summary statistics for central research topics. 
The results highlight the close relation between social interaction in courses and achievement. Achievement is also strongly associated with the stimulation of meaningful learning by presenting information in a clear way, relating it to the students, and using conceptually demanding learning tasks. Instruction and communication technology has comparably weak effect sizes, which did not increase over time. Strong moderator effects are found for almost all instructional methods, indicating that how a method is implemented in detail strongly affects achievement. Teachers with high-achieving students invest time and effort in designing the microstructure of their courses, establish clear learning goals, and employ feedback practices. This emphasizes the importance of teacher training in higher education. Students with high achievement are characterized by high self-efficacy, high prior achievement and intelligence, conscientiousness, and the goal-directed use of learning strategies. Barring the paucity of controlled experiments and the lack of meta-analyses on recent educational innovations, the variables associated with achievement in higher education are generally well investigated and well understood. By using these findings, teachers, university administrators, and policymakers can increase the effectivity of higher education. (PsycINFO Database Record", "title": "" }, { "docid": "4ac3c3fb712a1121e0990078010fe4b0", "text": "1.1 Introduction Relational data has two characteristics: first, statistical dependencies exist between the entities we wish to model, and second, each entity often has a rich set of features that can aid classification. For example, when classifying Web documents, the page's text provides much information about the class label, but hyperlinks define a relationship between pages that can improve classification [Taskar et al., 2002]. Graphical models are a natural formalism for exploiting the dependence structure among entities. Traditionally, graphical models have been used to represent the joint probability distribution p(y, x), where the variables y represent the attributes of the entities that we wish to predict, and the input variables x represent our observed knowledge about the entities. But modeling the joint distribution can lead to difficulties when using the rich local features that can occur in relational data, because it requires modeling the distribution p(x), which can include complex dependencies. Modeling these dependencies among inputs can lead to intractable models, but ignoring them can lead to reduced performance. A solution to this problem is to directly model the conditional distribution p(y|x), which is sufficient for classification. This is the approach taken by conditional random fields [Lafferty et al., 2001]. A conditional random field is simply a conditional distribution p(y|x) with an associated graphical structure. Because the model is", "title": "" }, { "docid": "1d56775bf9e993e0577c4d03131e7dc4", "text": "Canonical correlation analysis (CCA) is a classical method for seeking correlations between two multivariate data sets. During the last ten years, it has received more and more attention in the machine learning community in the form of novel computational formulations and a plethora of applications. We review recent developments in Bayesian models and inference methods for CCA which are attractive for their potential in hierarchical extensions and for coping with the combination of large dimensionalities and small sample sizes. 
The existing methods have not been particularly successful in fulfilling the promise yet; we introduce a novel efficient solution that imposes group-wise sparsity to estimate the posterior of an extended model which not only extracts the statistical dependencies (correlations) between data sets but also decomposes the data into shared and data set-specific components. In statistics literature the model is known as inter-battery factor analysis (IBFA), for which we now provide a Bayesian treatment.", "title": "" }, { "docid": "a28e63820d658c6e272000dacb7096fd", "text": "The feature-based graphical approach to robotic mapping provides a representationally rich and computationally efficient framework for an autonomous agent to learn a model of its environment. However, this formulation does not naturally support long-term autonomy because it lacks a notion of environmental change; in reality, “everything changes and nothing stands still, ” and any mapping and localization system that aims to support truly persistent autonomy must be similarly adaptive. To that end, in this paper we propose a novel feature-based model of environmental evolution over time. Our approach is based upon the development of an expressive probabilistic generative feature persistence model that describes the survival of abstract semi-static environmental features over time. We show that this model admits a recursive Bayesian estimator, the persistence filter, that provides an exact online method for computing, at each moment in time, an explicit Bayesian belief over the persistence of each feature in the environment. By incorporating this feature persistence estimation into current state-of-the-art graphical mapping techniques, we obtain a flexible, computationally efficient, and information-theoretically rigorous framework for lifelong environmental modeling in an ever-changing world.", "title": "" }, { "docid": "31dbedbcdb930ead1f8274ff2c181fcb", "text": "This paper sums up lessons learned from a sequence of cooperative design workshops where end users were enabled to design mobile systems through scenario building, role playing, and low-fidelity prototyping. We present a resulting fixed workshop structure with well-chosen constraints that allows for end users to explore and design new technology and work practices. In these workshops, the systems developers get input to design from observing how users stage and act out current and future use scenarios and improvise new technology to fit their needs. A theoretical framework is presented to explain the creative processes involved and the workshop as a user-centered design method. Our findings encourage us to recommend the presented workshop structure for design projects involving mobility and computer-mediated communication, in particular project where the future use of the resulting products and services also needs to be designed.", "title": "" }, { "docid": "318a4af201ed3563443dcbe89c90b6b4", "text": "Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how and to what extent it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing as its influence is likely to spread the complete IT landscape. 
Security is one of the major concerns that is of practical interest to decision makers when they are making critical strategic operational decisions. Distributed Denial of Service (DDoS) attacks are becoming more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand identify and mitigate potential DDoS attacks on business networks. The paper establishes solid coverage of security issues related to DDoS and virtualisation with a focus on structure, clarity, and well-defined blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who may not be necessarily cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems. Keywords—Cloud Computing Security; Distributed Denial of Service; Intrusion Detection; Intrusion Prevention; Virtualisation", "title": "" }, { "docid": "bb492930d57356bd84b2304cfdefa1fb", "text": "To convert wave energy into more suitable forms efficiently, a single-phase permanent magnet (PM) ac linear generator directly coupled to wave energy conversion is presented in this paper. Magnetic field performance of Halbach PM arrays is compared with that of radially magnetized structure. Then, the change of parameters in the geometry of slot and Halbach PM arrays' effect on the electromagnetic properties of the generator are investigated, and the optimization design guides are established for key design parameters. Finally, the simulation results are compared with test results of the prototype in wave energy conversion experimental system. Due to test and theory analysis results of prototype concordant with the finite-element analysis results, the proposed model and analysis method are correct and meet the requirements of direct-drive wave energy conversion system.", "title": "" }, { "docid": "ef8a61d3ff3aad461c57fe893e0b5bb6", "text": "In this paper, we propose an underwater wireless sensor network (UWSN) named SOUNET where sensor nodes form and maintain a tree-topological network for data gathering in a self-organized manner. After network topology discovery via packet flooding, the sensor nodes consistently update their parent node to ensure the best connectivity by referring to the timevarying neighbor tables. Such a persistent and self-adaptive method leads to high network connectivity without any centralized control, even when sensor nodes are added or unexpectedly lost. Furthermore, malfunctions that frequently happen in self-organized networks such as node isolation and closed loop are resolved in a simple way. Simulation results show that SOUNET outperforms other conventional schemes in terms of network connectivity, packet delivery ratio (PDR), and energy consumption throughout the network. In addition, we performed an experiment at the Gyeongcheon Lake in Korea using commercial underwater modems to verify that SOUNET works well in a real environment.", "title": "" } ]
scidocsrr
cfe14b3cf78743e22b126a7ad1777496
Voting for Voting in Online Point Cloud Object Detection
[ { "docid": "3734b4a9e6fe0d031a24e3cb92f22f95", "text": "In this work we address joint object category and instance recognition in the context of RGB-D (depth) cameras. Motivated by local distance learning, where a novel view of an object is compared to individual views of previously seen objects, we define a view-to-object distance where a novel view is compared simultaneously to all views of a previous object. This novel distance is based on a weighted combination of feature differences between views. We show, through jointly learning per-view weights, that this measure leads to superior classification performance on object category and instance recognition. More importantly, the proposed distance allows us to find a sparse solution via Group-Lasso regularization, where a small subset of representative views of an object is identified and used, with the rest discarded. This significantly reduces computational cost without compromising recognition accuracy. We evaluate the proposed technique, Instance Distance Learning (IDL), on the RGB-D Object Dataset, which consists of 300 object instances in 51 everyday categories and about 250,000 views of objects with both RGB color and depth. We empirically compare IDL to several alternative state-of-the-art approaches and also validate the use of visual and shape cues and their combination.", "title": "" }, { "docid": "95b93b1c67349e98fe803ee120193329", "text": "This paper tackles the problem of segmenting things that could move from 3D laser scans of urban scenes. In particular, we wish to detect instances of classes of interest in autonomous driving applications - cars, pedestrians and bicyclists - amongst significant background clutter. Our aim is to provide the layout of an end-to-end pipeline which, when fed by a raw stream of 3D data, produces distinct groups of points which can be fed to downstream classifiers for categorisation. We postulate that, for the specific classes considered in this work, solving a binary classification task (i.e. separating the data into foreground and background first) outperforms approaches that tackle the multi-class problem directly. This is confirmed using custom and third-party datasets gathered of urban street scenes. While our system is agnostic to the specific clustering algorithm deployed we explore the use of a Euclidean Minimum Spanning Tree for an end-to-end segmentation pipeline and devise a RANSAC-based edge selection criterion.", "title": "" } ]
[ { "docid": "e187403127990eb4b6c256ceb61d6f37", "text": "Modern data analysis stands at the interface of statistics, computer science, and discrete mathematics. This volume describes new methods in this area, with special emphasis on classification and cluster analysis. Those methods are applied to problems in information retrieval, phylogeny, medical dia... This is the first book primarily dedicated to clustering using multiobjective genetic algorithms with extensive real-life applications in data mining and bioinformatics. The authors first offer detailed introductions to the relevant techniques-genetic algorithms, multiobjective optimization, soft ...", "title": "" }, { "docid": "7a5626554753733d8f20a15fe161beca", "text": "Worldwide, lung cancer is the most common cause of major cancer incidence and mortality in men, whereas in women it is the third most common cause of cancer incidence and the second most common cause of cancer mortality. In 2010 the American Cancer Society estimated that lung cancer would account for more than 222,520 new cases in the United States during 2010 and 157,300 cancer deaths. Although lung cancer incidence in the United States began to decline in men in the early 1980s, it seems to have plateaued in women. Lung cancer can be diagnosed pathologically either by a histologic or cytologic approach. The new International Association for the Study of Lung Cancer (IASLC)/American Thoracic Society (ATS)/ European Respiratory Society (ERS) Lung Adenocarcinoma Classification has made major changes in how lung adenocarcinoma is diagnosed. It will significantly alter the structure of the previous 2004 World Health Organization (WHO) classification of lung tumors (Box 1). Not only does it address classification in resectionspecimens (seeBox1), but it also makes recommendations applicable to small biopsies and cytology specimens, for diagnostic termsandcriteria forothermajor histologic subtypes inaddition to adenocarcinoma (Table1). The4major histologic types of lung cancer are squamous cell carcinoma, adenocarcinoma, small cell carcinoma, and large cell carcinoma. These major types can be subclassified into more specific subtypes such as lepidic predominant subtype of adenocarcinoma or the basaloid variant of large cell carcinoma.More detailed reviews of the pathology, cytology, and molecular biology of lung cancer can be found elsewhere.", "title": "" }, { "docid": "c0ccdaa5aab1852be070ef342b893cde", "text": "Snakebite envenoming is a neglected public health challenge of compelling importance in many regions of the world, particularly sub-Saharan Africa, Asia, Latin America and Papua-New Guinea. Addressing the problem of snakebite effectively demands an integrated multifocal approach, targeting complex problems and involving many participants. It must comprise: (a) Acquisition of reliable information on the incidence and mortality attributable to snakebite envenoming, and the number of people left with permanent sequelae. (b) Improvements in production of effective and safe antivenoms, through strategies aimed at strengthening the technological capacity of antivenom manufacturing laboratories. (c) Increasing the capacity of low-income countries to produce specific immunogens(snake venoms) locally, and to perform their own quality control of antivenoms. (d) Commitments from regional producers to manufacture antivenoms for countries where antivenom production is not currently feasible. 
(e) Implementation of financial initiatives guaranteeing the acquisition of adequate volumes of antivenom at affordable prices in low-income countries. (f) Performance of collaborative studies on the safety and effectiveness of antivenoms assessed preclinically and by properly designed clinical trials. (g) Development of antivenom distribution programmes tailored to the real needs and epidemiological situations of rural areas in each country. (h) Permanent training programmes for health staff, particularly in rural areas where snakebites are frequent.(i) Implementation of programmes to support those people whose snakebites resulted in chronic disabilities. (j) Preventive and educational programmes at the community level, with the active involvement of local organizations and employing modern methods of health promotion. Such an integrated approach, currently being fostered by the Global Snake Bite Initiative of the International Society on Toxinology and by the World Health Organization, will help to alleviate the enormous burden of human suffering inflicted by snakebite envenoming.", "title": "" }, { "docid": "3b584918e05d5e7c0c34f3ad846285d3", "text": "Recently, there is increasing interest and research on the interpretability of machine learning models, for example how they transform and internally represent EEG signals in Brain-Computer Interface (BCI) applications. This can help to understand the limits of the model and how it may be improved, in addition to possibly provide insight about the data itself. Schirrmeister et al. (2017) have recently reported promising results for EEG decoding with deep convolutional neural networks (ConvNets) trained in an end-to-end manner and, with a causal visualization approach, showed that they learn to use spectral amplitude changes in the input. In this study, we investigate how ConvNets represent spectral features through the sequence of intermediate stages of the network. We show higher sensitivity to EEG phase features at earlier stages and higher sensitivity to EEG amplitude features at later stages. Intriguingly, we observed a specialization of individual stages of the network to the classical EEG frequency bands alpha, beta, and high gamma. Furthermore, we find first evidence that particularly in the last convolutional layer, the network learns to detect more complex oscillatory patterns beyond spectral phase and amplitude, reminiscent of the representation of complex visual features in later layers of ConvNets in computer vision tasks. Our findings thus provide insights into how ConvNets hierarchically represent spectral EEG features in their intermediate layers and suggest that ConvNets can exploit and might help to better understand the compositional structure of EEG time series.", "title": "" }, { "docid": "a08e1710d15b69ea23980daa722ace0d", "text": "Olympic combat sports separate athletes into weight divisions, in an attempt to reduce size, strength, range and/or leverage disparities between competitors. Official weigh-ins are conducted anywhere from 3 and up to 24 h prior to competition ensuring athletes meet weight requirements (i.e. have 'made weight'). Fighters commonly aim to compete in weight divisions lower than their day-to-day weight, achieved via chronic and acute manipulations of body mass (BM). Although these manipulations may impair health and absolute performance, their strategic use can improve competitive success. 
Key considerations are the acute manipulations around weigh-in, which differ in importance, magnitude and methods depending on the requirements of the individual combat sport and the weigh-in regulations. In particular, the time available for recovery following weigh-in/before competition will determine what degree of acute BM loss can be implemented and reversed. Increased exercise and restricted food and fluid intake are undertaken to decrease body water and gut contents reducing BM. When taken to the extreme, severe weight-making practices can be hazardous, and efforts have been made to reduce their prevalence. Indeed some have called for the abolition of these practices altogether. In lieu of adequate strategies to achieve this, and the pragmatic recognition of the likely continuation of these practices as long as regulations allow, this review summarises guidelines for athletes and coaches for manipulating BM and optimising post weigh-in recovery, to achieve better health and performance outcomes across the different Olympic combat sports.", "title": "" }, { "docid": "3531f08daf40f88915eadba307252c6f", "text": "Although some crowdsourcing aggregation models have been introduced to aggregate noisy crowd labels, these models mostly consider single-option (i.e. discrete) crowd labels as the input variables, and are not compatible with multi-option (i.e. non-deterministic) crowd data. In this paper, we propose a novel joint generative-discriminative aggregation model, which is able to efficiently deal with both single-option and multi-option crowd labels. Considering the confidence of workers for each option as the input data, we first introduce a new discriminative aggregation model, called Constrained Weighted Majority Voting (CWMVL1), which improves the performance of majority voting method. CWMVL1 considers flexible reliability parameters for crowd workers, employs L1-norm loss function to deal with noisy crowd data, and includes optimization constraints to have probabilistic outputs. We prove that our object is convex, and derive an efficient optimization algorithm. Moreover, we integrate the discriminative CWMVL1 model with a generative model, resulting in a powerful joint aggregation model. Combination of these sub-models is obtained in a probabilistic framework rather than a heuristic way. For our joint model, we derive an efficient optimization algorithm, which alternates between updating the parameters and estimating the potential true labels. Experimental results indicate that the proposed aggregation models achieve superior or competitive results in comparison with the state-of-the-art models on single-option and multi-option crowd datasets, while having faster convergence rates and more reliable predictions.", "title": "" }, { "docid": "49a53a8cb649c93d685e832575acdb28", "text": "We address the vehicle detection and classification problems using Deep Neural Networks (DNNs) approaches. Here we answer to questions that are specific to our application including how to utilize DNN for vehicle detection, what features are useful for vehicle classification, and how to extend a model trained on a limited size dataset, to the cases of extreme lighting condition. 
Answering these questions, we propose our approach that outperforms state-of-the-art methods, and achieves promising results on images with extreme lighting conditions.", "title": "" }, { "docid": "9d8b088c8a97b8aa52703c1fcf877675", "text": "The project proposes an efficient implementation for IoT (Internet of Things) used for monitoring and controlling the home appliances via the World Wide Web. The home automation system uses portable devices as a user interface. They can communicate with the home automation network through an Internet gateway, by means of low power communication protocols like Zigbee, Wi-Fi etc. This project aims at controlling home appliances via Smartphone using Wi-Fi as the communication protocol and a Raspberry Pi as the server system. The user interacts directly with the system through a web-based interface, whereas home appliances like lights, fan and door lock are remotely controlled through a simple website. An extra feature that enhances protection from fire accidents is its capability of detecting smoke, so that in the event of any fire, an alert message and an image are sent to the Smartphone. The server will be interfaced with relay hardware circuits that control the appliances running at home. The communication with the server allows the user to select the appropriate device. The server communicates with the corresponding relays. If the Internet connection is down or the server is not up, the embedded system board will still manage and operate the appliances locally. By this we provide a scalable and cost-effective Home Automation system.", "title": "" }, { "docid": "91f5c7b130a7eadef8df1b596cda1eaf", "text": "It is well-established that within crisis-related communications, rumors are likely to emerge. False rumors, i.e. misinformation, can be detrimental to crisis communication and response; it is therefore important not only to be able to identify messages that propagate rumors, but also corrections or denials of rumor content. In this work, we explore the task of automatically classifying rumor stances expressed in crisis-related content posted on social media. Utilizing a dataset of over 4,300 manually coded tweets, we build a supervised machine learning model for this task, achieving an accuracy over 88% across a diverse set of rumors of different types.", "title": "" }, { "docid": "6ac9888d5474e4fce65825cca5e1101a", "text": "To better understand clinical empathy and what factors can undermine its experience and outcome in care-giving settings, a large-scale study was conducted with 7,584 board certified practicing physicians. Online validated instruments assessing different aspects of empathy, distress, burnout, altruistic behavior, emotional awareness, and well-being were used. Compassion satisfaction was strongly associated with empathic concern, perspective taking and altruism, while compassion fatigue (burnout and secondary traumatic stress) was more closely related to personal distress and alexithymia. Gender had a highly selective effect on empathic concern, with women displaying higher values, which led to a wide array of negative and devalued feelings. Years of experience did not influence dispositional measures per se after controlling for the effect of age and gender.
Participants who experienced compassion fatigue with little to no compassion satisfaction showed the highest scores on personal distress and alexithymia as well as the strongest indicators of compassion fatigue. Physicians who have difficulty regulating their negative arousal and describing and identifying emotions seem to be more prone to emotional exhaustion, detachment, and a low sense of accomplishment. On the contrary, the ability to engage in self-other awareness and regulate one's emotions and the tendency to help others, seem to contribute to the sense of compassion that comes from assisting patients in clinical practice.", "title": "" }, { "docid": "935c404529b02cee2620e52f7a09b84d", "text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.", "title": "" }, { "docid": "b395aa3ae750ddfd508877c30bae3a38", "text": "This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed. 
It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.", "title": "" }, { "docid": "649b1f289395aa6251fe9f3288209b67", "text": "Besides game-based learning, gamification is an upcoming trend in education, studied in various empirical studies and found in many major learning management systems. Employing a newly developed qualitative instrument for assessing gamification in a system, we studied five popular LMS for their specific implementations. The instrument enabled experts to extract affordances for gamification in the five categories experiential, mechanics, rewards, goals, and social. Results show large similarities in all of the systems studied and few varieties in approaches to gamification.", "title": "" }, { "docid": "515cbc485480e094320f23d142bd3b84", "text": "Development of Emotional Intelligence Training for Certified Registered Nurse Anesthetists by Rickey King MSNA, Gooding Institute of Nurse Anesthesia, 2006 BSN, Jacksonville University, 2003 ASN, Oklahoma State University, 1988 Project Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Nursing Practice Walden University February 2016 Abstract The operating room is a high stress, high stakes, emotionally charged area with an interdisciplinary team that must work cohesively for the benefit of all. If an operating room staff does not understand those emotions, such a deficit can lead to decreased effective communication and an ineffectual response to problems. Emotional intelligence is a conceptual framework encompassing the ability to identify, assess, perceive, and manage emotions. The research question for this project is aimed at understanding how an educational intervention could help to improve the emotional intelligence of anesthetists and their ability to communicate with other operation room staff to produce effective problem solving. The purpose of this scholarly project was to design a 5-week evidence-based, educational intervention that will be implemented for 16 nurse anesthetists practicing in 3 rural hospitals in Southern Kentucky. The Emotional and Social Competency Inventory – University Edition will be offered to the nurse anesthetists prior to the educational intervention and 6 weeks post implementation to determine impact on the 12 core concepts of emotional intelligence which are categorized under self-awareness, social awareness, self-management, and relationship management. It is hoped that this project will improve emotional intelligence, which directly impacts interdisciplinary communication and produces effective problem solving and improved patient outcomes. The positive social change lies in the ability of the interdisciplinary participants to address stressful events benefitting patients, operating room personnel, and the anesthetist by decreasing negative outcomes and horizontal violence in the operating room.The operating room is a high stress, high stakes, emotionally charged area with an interdisciplinary team that must work cohesively for the benefit of all. If an operating room staff does not understand those emotions, such a deficit can lead to decreased effective communication and an ineffectual response to problems. 
Emotional intelligence is a conceptual framework encompassing the ability to identify, assess, perceive, and manage emotions. The research question for this project is aimed at understanding how an educational intervention could help to improve the emotional intelligence of anesthetists and their ability to communicate with other operation room staff to produce effective problem solving. The purpose of this scholarly project was to design a 5-week evidence-based, educational intervention that will be implemented for 16 nurse anesthetists practicing in 3 rural hospitals in Southern Kentucky. The Emotional and Social Competency Inventory – University Edition will be offered to the nurse anesthetists prior to the educational intervention and 6 weeks post implementation to determine impact on the 12 core concepts of emotional intelligence which are categorized under self-awareness, social awareness, self-management, and relationship management. It is hoped that this project will improve emotional intelligence, which directly impacts interdisciplinary communication and produces effective problem solving and improved patient outcomes. The positive social change lies in the ability of the interdisciplinary participants to address stressful events benefitting patients, operating room personnel, and the anesthetist by decreasing negative outcomes and horizontal violence in the operating room. Development of Emotional Intelligence Training for Certified Registered Nurse Anesthetists by Rickey King MSNA, Gooding Institute of Nurse Anesthesia, 2006 BSN, Jacksonville University, 2003 ASN, Oklahoma State University, 1988 Project Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Nursing Practice", "title": "" }, { "docid": "4a84f6400edf8cf0d3a7245efae6e5f7", "text": "The explosive use of social media also makes it a popular platform for malicious users, known as social spammers, to overwhelm normal users with unwanted content. One effective way for social spammer detection is to build a classifier based on content and social network information. However, social spammers are sophisticated and adaptable to game the system with fast evolving content and network patterns. First, social spammers continually change their spamming content patterns to avoid being detected. Second, reflexive reciprocity makes it easier for social spammers to establish social influence and pretend to be normal users by quickly accumulating a large number of “human” friends. It is challenging for existing anti-spamming systems based on batch-mode learning to quickly respond to newly emerging patterns for effective social spammer detection. In this paper, we present a general optimization framework to collectively use content and network information for social spammer detection, and provide the solution for efficient online processing. Experimental results on Twitter datasets confirm the effectiveness and efficiency of the proposed framework. Introduction Social media services, like Facebook and Twitter, are increasingly used in various scenarios such as marketing, journalism and public relations. While social media services have emerged as important platforms for information dissemination and communication, it has also become infamous for spammers who overwhelm other users with unwanted content. The (fake) accounts, known as social spammers (Webb et al. 2008; Lee et al. 
2010), are a special type of spammers who coordinate among themselves to launch various attacks such as spreading ads to generate sales, disseminating pornography, viruses, phishing, befriending victims and then surreptitiously grabbing their personal information (Bilge et al. 2009), or simply sabotaging a system’s reputation (Lee et al. 2010). The problem of social spamming is a serious issue prevalent in social media sites. Characterizing and detecting social spammers can significantly improve the quality of user experience, and promote the healthy use and development of a social networking system. Following spammer detection in traditional platforms like Email and the Web (Chen et al. 2012), some efforts have Copyright c 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. been devoted to detect spammers in various social networking sites, including Twitter (Lee et al. 2010), Renren (Yang et al. 2011), Blogosphere (Lin et al. 2007), etc. Existing methods can generally be divided into two categories. First category is to employ content analysis for detecting spammers in social media. Profile-based features (Lee et al. 2010) such as content and posting patterns are extracted to build an effective supervised learning model, and the model is applied on unseen data to filter social spammers. Another category of methods is to detect spammers via social network analysis (Ghosh et al. 2012). A widely used assumption in the methods is that spammers cannot establish an arbitrarily large number of social trust relations with legitimate users. The users with relatively low social influence or social status in the network will be determined as spammers. Traditional spammer detection methods become less effective due to the fast evolution of social spammers. First, social spammers show dynamic content patterns in social media. Spammers’ content information changes too fast to be detected by a static anti-spamming system based on offline modeling (Zhu et al. 2012). Spammers continue to change their spamming strategies and pretend to be normal users to fool the system. A built system may become less effective when the spammers create many new, evasive accounts. Second, many social media sites like Twitter have become a target of link farming (Ghosh et al. 2012). The reflexive reciprocity (Weng et al. 2010; Hu et al. 2013b) indicates that many users simply follow back when they are followed by someone for the sake of courtesy. It is easier for spammers to acquire a large number of follower links in social media. Thus, with the perceived social influence, they can avoid being detected by network-based methods. Similar results targeting other platforms such as Renren (Yang et al. 2011) have been reported in literature as well. Existing systems rely on building a new model to capture newly emerging content-based and network-based patterns of social spammers. Given the rapidly evolving nature, it is necessary to have a framework that efficiently reflects the effect of newly emerging data. One efficient approach to incrementally update existing model in large-scale data analysis is online learning. While online learning has been studied for years and shown its effectiveness in many applications such as image and video processing (Mairal et al. 
2009) and human computer interaction.", "title": "" }, { "docid": "64c3de6f135df88ab234068dc5d92b08", "text": "The drive to improve energy efficiency and reduce electrical loading has resulted in energy efficient lighting such as compact fluorescent lamps (CFLs) replacing conventional incandescent lamps. However, the presence of such non-linear loads has brought about the injection of voltage and current harmonics into electrical networks. As transformers are the interface between the supply and the non-linear loads, the investigation of their effects on transformer losses is of great importance. These harmonics can cause excessive loss and abnormal temperature rise in the transformers, thus reducing their operational life span. This paper investigates the impact of current and voltage harmonics of the loads on a single-phase 25kVA distribution transformer. Harmonic spectra of a range of non-linear loads including CFL, LED tube, PC and fluorescent lamp are obtained. A single-phase inverter is used for harmonic generation to simulate power supply harmonics injected into the transformer. Open circuit and short circuit tests are conducted on the transformer under the effect of harmonics, and the impacts on core loss are analyzed.", "title": "" }, { "docid": "622d11d4eefeacbed785ee6fcc14b69b", "text": "In our nursing program, we require a transcript for every course taken at any university or college, and it is always frustrating when we have to wait for copies to arrive before making our decisions. To be honest, if a candidate took Religion 101 at a community college and later transferred to the BSN program, I would be willing to pass on the community college transcript, but the admissions office is less flexible. And, although we used to be able to ask the student to have another copy sent if we did not have a transcript in the file, we now must wait for the student to have the college upload the transcript into an admissions system and wait for verification. I can assure you, most nurses, like other students today, take a lot of courses across many colleges without getting a degree. I sometimes have as many as 10 transcripts to review. When I saw an article titled “Blockchain: Letting Students Own Their Credentials” (Schaffnauser, 2017), I was therefore intrigued. I had already heard of blockchain as a tool to take the middleman out of the loop when doing financial transactions with Bitcoin. Now the thought of students owning their own credentials got me thinking about the movement toward new forms of credentialing from professional organizations (e.g., badges, certification documents). Hence, my decision to explore blockchain and its potential. Let’s start with some definitions. Simply put, blockchain is a distributed digital ledger. Technically speaking, it is “a peer-to-peer (P2P) distributed ledger technology for a new generation of transactional applications that establishes transparency and trust” (Linn & Koo, n.d.). Watter (2016) noted that “the blockchain is a distributed database that provides an unalterable, (semi-) public record of digital transactions. Each block aggregates a timestamped batch of transactions to be included in the ledger — or rather, in the blockchain. Each block is identified by a cryptographic signature.
The blockchain contains an un-editable record of all the transactions made.” If we take this apart, here is what we have: a database that is distributed to computers associated with members of the network. Thus, rather than trying to access one central database, all members have copies of the database. Each time a transaction occurs, it is placed in a block that is given a time stamp and is “digitally signed using public key cryptography — which uses both a public and private key” (Watter, 2016). Locks are then connected so there is a historical record and they cannot be altered. According to Lin and Koo", "title": "" }, { "docid": "7b6c039783091260cee03704ce9748d8", "text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.", "title": "" }, { "docid": "2ae83e91df028cac5f98df60bb2f05f5", "text": "This paper reports on a multimethod study of knowledge transfer in international acquisitions. Using questionnaire data we show that the transfer of technological know-how is facilitated by communication, visits & meetings, and by time elapsed since acquisition, while the transfer of patents is associated with the articulability of the knowledge, the size of the acquired unit, and the recency of the acquisition. Using case study data, we show that the immediate post-acquisition period is characterized by imposed one-way transfers of knowledge from the acquirer to the acquired, but over time this gives way to high-quality reciprocal knowledge transfer.", "title": "" }, { "docid": "e1f78d2e6922d3ed4aa494bd73ebac05", "text": "Anxiety disorders such as obsessive-compulsive disorder (OCD), panic disorder, social anxiety disorder (SAD), generalized anxiety disorder (GA) and posttraumatic stress disorder (PTSD) are the most common of the psychiatric disorders worldwide with a prevalence of up to 18% (1). These disorders often only receive little attention by mental health services despite their great contribution to overall clinical burden and economic costs Combining Cognitive Behavioural Therapy and Pharmacotherapy in the Treatment of Anxiety Disorders: True Gains or False Hopes?", "title": "" } ]
scidocsrr
eb8acbe10d49c7e9f4bc4d5ce2d2ea3a
Mathematics of Deep Learning
[ { "docid": "2b797018d25c703295f69af3c2f8772e", "text": "We analyze stochastic gradient descent for optimizing nonconvex functions. In many cases for nonconvex functions the goal is to find a reasonable local minimu m, and the main concern is that gradient updates are trapped in saddle points . In this paper we identifystrict saddleproperty for non-convex problem that allows for efficient optimization. Using this p ro erty we show that stochastic gradient descent converges to a local minimum in a polynomial number o f iterations. To the best of our knowledge this is the first work that gives global convergence guarantees for stochastic gradient descent on nonconvex functions with exponentially many local minima and s a dle points. Our analysis can be applied to orthogonal tensor decomposit ion, which is widely used in learning a rich class of latent variable models. We propose a new optim ization formulation for the tensor decomposition problem that has strict saddle property. As a result we get the first online algorithm for orthogonal tensor decomposition with global convergen ce guarantee.", "title": "" } ]
[ { "docid": "738a69ad1006c94a257a25c1210f6542", "text": "Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.", "title": "" }, { "docid": "c8d5a8d13d3cd9e150537bd8957a4512", "text": "Classroom interactivity has a number of significant benefits: it promotes an active learning environment, provides greater feedback for lecturers, increases student motivation, and enables a learning community (Bishop, Dinkins, & Dominick, 2003; Mazur, 1998; McConnell et al., 2006). On the other hand, interactive activities for large classes (over 100 students) have proven to be quite difficult and, often, inefficient (Freeman & Blayney, 2005).", "title": "" }, { "docid": "0f49e229c08672dfba4026ec5ebca3bc", "text": "A grid array antenna is presented in this paper with sub grid arrays and multiple feed points, showing enhanced radiation characteristics and sufficient design flexibility. For instance, the grid array antenna can be easily designed as a linearly- or circularly-polarized, unbalanced or balanced antenna. A design example is given for a linearly-polarized unbalanced grid array antenna in Ferro A6M low temperature co-fired ceramic technology for 60-GHz radios to operate from 57 to 66 GHz (≈ 14.6% at 61.5 GHz ). It consists of 4 sub grid arrays and 4 feed points that are connected to a single-ended 50-Ω source by a quarter-wave matched T-junction network. The simulated results indicate that the grid array antenna has the maximum gain of 17.7 dBi at 59 GHz , an impedance bandwidth (|S11| ≤ -10&nbsp;dB) nearly from 56 to 67.5 GHz (or 18.7%), a 3-dB gain bandwidth from 55.4 to 66 GHz (or 17.2%), and a vertical beam bandwidth in the broadside direction from 57 to 66 GHz (14.6%). The measured results are compared with the simulated ones. Discrepancies and their causes are identified with a tolerance analysis on the fabrication process.", "title": "" }, { "docid": "9a6fbf264e212e6ee6b2d663042542f0", "text": "Detailed 3D visual models of indoor spaces, from walls and floors to objects and their configurations, can provide extensive knowledge about the environments as well as rich contextual information of people living therein. 
Vision-based 3D modeling has only seen limited success in applications, as it faces many technical challenges that only a few experts understand, let alone solve. In this work we utilize (Kinect style) consumer depth cameras to enable non-expert users to scan their personal spaces into 3D models. We build a prototype mobile system for 3D modeling that runs in real-time on a laptop, assisting and interacting with the user on-the-fly. Color and depth are jointly used to achieve robust 3D registration. The system offers online feedback and hints, tolerates human errors and alignment failures, and helps to obtain complete scene coverage. We show that our prototype system can both scan large environments (50 meters across) and at the same time preserve fine details (centimeter accuracy). The capability of detailed 3D modeling leads to many promising applications such as accurate 3D localization, measuring dimensions, and interactive visualization.", "title": "" }, { "docid": "197c9e1b89bf4d28ab8fc4be2d617370", "text": "OBJECTIVE\nParkinson's disease (PD) is a progressive neurological disorder characterised by a large number of motor and non-motor features that can impact on function to a variable degree. This review describes the clinical characteristics of PD with emphasis on those features that differentiate the disease from other parkinsonian disorders.\n\n\nMETHODS\nA MedLine search was performed to identify studies that assess the clinical characteristics of PD. Search terms included \"Parkinson's disease\", \"diagnosis\" and \"signs and symptoms\".\n\n\nRESULTS\nBecause there is no definitive test for the diagnosis of PD, the disease must be diagnosed based on clinical criteria. Rest tremor, bradykinesia, rigidity and loss of postural reflexes are generally considered the cardinal signs of PD. The presence and specific presentation of these features are used to differentiate PD from related parkinsonian disorders. Other clinical features include secondary motor symptoms (eg, hypomimia, dysarthria, dysphagia, sialorrhoea, micrographia, shuffling gait, festination, freezing, dystonia, glabellar reflexes), non-motor symptoms (eg, autonomic dysfunction, cognitive/neurobehavioral abnormalities, sleep disorders and sensory abnormalities such as anosmia, paresthesias and pain). Absence of rest tremor, early occurrence of gait difficulty, postural instability, dementia, hallucinations, and the presence of dysautonomia, ophthalmoparesis, ataxia and other atypical features, coupled with poor or no response to levodopa, suggest diagnoses other than PD.\n\n\nCONCLUSIONS\nA thorough understanding of the broad spectrum of clinical manifestations of PD is essential to the proper diagnosis of the disease. Genetic mutations or variants, neuroimaging abnormalities and other tests are potential biomarkers that may improve diagnosis and allow the identification of persons at risk.", "title": "" }, { "docid": "7220b2b3d79d91a2cf5547ca19b6aac6", "text": "Traditional methods of detecting and mapping utility poles are inefficient and costly because of the demand for visual interpretation with quality data sources or intense field inspection. The advent of deep learning for object detection provides an opportunity for detecting utility poles from side-view optical images. In this study, we proposed using a deep learning-based method for automatically mapping roadside utility poles with crossarms (UPCs) from Google Street View (GSV) images. 
The method combines the state-of-the-art DL object detection algorithm (i.e., the RetinaNet object detection algorithm) and a modified brute-force-based line-of-bearing (LOB; an LOB is the ray towards the location of the target [the UPC here] from the original location of the sensor [the GSV mobile platform]) measurement method to estimate the locations of detected roadside UPCs from GSV. Experimental results indicate that: (1) both the average precision (AP) and the overall accuracy (OA) are around 0.78 when the intersection-over-union (IoU) threshold is greater than 0.3, based on the testing of 500 GSV images with a total number of 937 objects; and (2) around 2.6%, 47%, and 79% of estimated locations of utility poles are within 1 m, 5 m, and 10 m buffer zones, respectively, around the referenced locations of utility poles. In general, this study indicates that even in a complex background, most utility poles can be detected with the use of DL, and the LOB measurement method can estimate the locations of most UPCs.", "title": "" }, { "docid": "31d7a6da7093d50d0d5890cce4cb60cf", "text": "We introduce a novel Gaussian process based Bayesian model for asymmetric transfer learning. We adopt a two-layer feed-forward deep Gaussian process as the task learner of source and target domains. The first layer projects the data onto a separate non-linear manifold for each task. We perform knowledge transfer by projecting the target data also onto the source domain and linearly combining its representations on the source and target domain manifolds. Our approach achieves the state-of-the-art in a benchmark real-world image categorization task, and improves on it in cross-tissue tumor detection from histopathology tissue slide images.", "title": "" }, { "docid": "2798217f6e2d9194a9a30834ed9af47a", "text": "The main obstacle to transmitting images in wireless sensor networks is the lack of an appropriate strategy for processing the large volume of data such as images. Packet error rates are high because of the very large number of packets carrying the captured image data and the need for retransmission in case of errors; moreover, the energy reserve and bandwidth are insufficient to accomplish these tasks. This paper presents a new, effective technique called “Background subtraction” to compress, process and transmit the images in a wireless sensor network. The practical results show the effectiveness of this approach in making image compression in wireless sensor networks achievable, reliable and efficient in terms of energy and the minimization of the amount of image data.", "title": "" }, { "docid": "8026cfefc207b1d7386cae709f77bb9e", "text": "Regularization plays a central role in the analysis of modern data, where non-regularized fitting is likely to lead to over-fitted models, useless for both prediction and interpretation. We consider the design of incremental algorithms which follow paths of regularized solutions, as the regularization varies. These approaches often result in methods which are both efficient and highly flexible. We suggest a general path-following algorithm based on second-order approximations, prove that under mild conditions it remains “very close” to the path of optimal solutions and illustrate it with examples.", "title": "" }, { "docid": "6132281101d1e4eb4c021778a5d77624", "text": "Knowledge assessment instruments, or tests, are commonly created by faculty in classroom settings to measure student knowledge and skill.
Another crucial role for assessment instruments is in gauging student learning in response to a computer science education research project, or intervention. In an increasingly interdisciplinary landscape, it is crucial to validate knowledge assessment instruments, yet developing and validating these tests for computer science poses substantial challenges. This paper presents a seven-step approach to designing, iteratively refining, and validating knowledge assessment instruments designed not to assign grades but to measure the efficacy or promise of novel interventions. We also detail how this seven-step process is being instantiated within a three-year project to implement a game-based learning environment for middle school computer science. This paper serves as a practical guide for adapting widely accepted psychometric practices to the development and validation of computer science knowledge assessments to support research.", "title": "" }, { "docid": "13f5dfb4571c1d32a39dbdd72946d0c8", "text": "We present MagicToon, an interactive modeling system with mobile augmented reality (AR) that allows children to build 3D cartoon scenes creatively from their own 2D cartoon drawings on paper. Our system consists of two major components: an automatic 2D-to-3D cartoon model creator and an interactive model editor to construct more complicated AR scenes. The model creator can generate textured 3D cartoon models according to 2D drawings automatically and overlay them on the real world, bringing life to flat cartoon drawings. With our interactive model editor, the user can perform several optional operations on 3D models such as copying and animating in AR context through a touchscreen of a handheld device. The user can also author more complicated AR scenes by placing multiple registered drawings simultaneously. The results of our user study have shown that our system is easier to use compared with traditional sketch-based modeling systems and can give more play to children's innovations compared with AR coloring books.", "title": "" }, { "docid": "89865dbb80fcb2d9c5d4d4fe4fe10b83", "text": "Elaborate efforts have been made to eliminate fake markings and refine ω-markings in the existing modified or improved Karp–Miller trees for various classes of unbounded Petri nets since the late 1980s. The main issues fundamentally are incurred due to the generation manners of the trees that prematurely introduce some potentially unbounded markings with ω symbols and keep their growth into new ones. Aiming at addressing them, this work presents a non-Karp–Miller tree called a lean reachability tree (LRT). First, a sufficient and necessary condition of the unbounded places and some reachability properties are established to reveal the features of unbounded nets. Then, we present an LRT generation algorithm with a sufficiently enabling condition (SEC). When generating a tree, SEC requires that the components of a covering node are not replaced by ω symbols, but continue to grow until any transition on an output path of an unbounded place has been branch-enabled at least once. In return, no fake marking is produced and no legal marking is lost during the tree generation.
We prove that LRT can faithfully express by folding, instead of equivalently representing, the reachability set of an unbounded net. Also, some properties of LRT are examined and a sufficient condition of deadlock existence based on it is given. The case studies show that LRT outperforms the latest modified Karp–Miller trees in terms of size, expressiveness, and applicability. It can be applied to the analysis of the emerging discrete event systems with infinite states.", "title": "" }, { "docid": "5010761051983f5de1f18a11d477f185", "text": "Financial forecasting has been challenging problem due to its high non-linearity and high volatility. An Artificial Neural Network (ANN) can model flexible linear or non-linear relationship among variables. ANN can be configured to produce desired set of output based on set of given input. In this paper we attempt at analyzing the usefulness of artificial neural network for forecasting financial data series with use of different algorithms such as backpropagation, radial basis function etc. With their ability of adapting non-linear and chaotic patterns, ANN is the current technique being used which offers the ability of predicting financial data more accurately. \"A x-y-1 network topology is adopted because of x input variables in which variable y was determined by the number of hidden neurons during network selection with single output.\" Both x and y were changed.", "title": "" }, { "docid": "572867885a16afc0af6a8ed92632a2a7", "text": "We present an Efficient Log-based Troubleshooting(ELT) system for cloud computing infrastructures. ELT adopts a novel hybrid log mining approach that combines coarse-grained and fine-grained log features to achieve both high accuracy and low overhead. Moreover, ELT can automatically extract key log messages and perform invariant checking to greatly simplify the troubleshooting task for the system administrator. We have implemented a prototype of the ELT system and conducted an extensive experimental study using real management console logs of a production cloud system and a Hadoop cluster. Our experimental results show that ELT can achieve more efficient and powerful troubleshooting support than existing schemes. More importantly, ELT can find software bugs that cannot be detected by current cloud system management practice.", "title": "" }, { "docid": "400dce50037a38d19a3057382d9246b5", "text": "A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and, thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. 
It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus.", "title": "" }, { "docid": "7cbe504e03ab802389c48109ed1f1802", "text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory locationbased focusing mechanisms.", "title": "" }, { "docid": "9fb9664eea84d3bc0f59f7c4714debc1", "text": "International research has shown that users are complacent when it comes to smartphone security behaviour. This is contradictory, as users perceive data stored on the `smart' devices to be private and worth protecting. Traditionally less attention is paid to human factors compared to technical security controls (such as firewalls and antivirus), but there is a crucial need to analyse human aspects as technology alone cannot deliver complete security solutions. Increasing a user's knowledge can improve compliance with good security practices, but for trainers and educators to create meaningful security awareness materials they must have a thorough understanding of users' existing behaviours, misconceptions and general attitude towards smartphone security.", "title": "" }, { "docid": "e5338d8c6c765165c65de5c4f390da2a", "text": "Rewards are sparse in the real world and most today’s reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself — thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward — making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory — which incorporates rich information about environment dynamics. This allows us to overcome the known “couch-potato” issues of prior work — when the agent finds a way to instantly gratify itself by exploiting actions which lead to unpredictable consequences. We test our approach in visually rich 3D environments in VizDoom and DMLab. In VizDoom, our agent learns to successfully navigate to a distant goal at least 2 times faster than the state-of-the-art curiosity method ICM. 
In DMLab, our agent generalizes well to new procedurally generated levels of the game — reaching the goal at least 2 times more frequently than ICM on test mazes with very sparse reward.", "title": "" }, { "docid": "3fd551696803695056dd759d8f172779", "text": "The aim of this research essay is to examine the structural nature of theory in Information Systems. Despite the importance of theory, questions relating to its form and structure are neglected in comparison with questions relating to epistemology. The essay addresses issues of causality, explanation, prediction, and generalization that underlie an understanding of theory. A taxonomy is proposed that classifies information systems theories with respect to the manner in which four central goals are addressed: analysis, explanation, prediction, and prescription. Five interrelated types of theory are distinguished: (1) theory for analyzing, (2) theory for explaining, (3) theory for predicting, (4) theory for explaining and predicting, and (5) theory for design and action. Examples illustrate the nature of each theory type. The applicability of the taxonomy is demonstrated by classifying a sample of journal articles. The paper contributes by showing that multiple views of theory exist and by exposing the assumptions underlying different viewpoints. In addition, it is suggested that the type of theory under development can influence the choice of an epistemological approach. Support is given for the legitimacy and value of each theory type. The building of integrated bodies of theory that encompass all theory types is advocated.", "title": "" }, { "docid": "cd36c3cb5949cf93a57d50b1c2f6129f", "text": "Airborne Wind Energy (AWE) is a renewable energy technology that uses wind power devices rather than traditional wind turbines that take advantage of the kinetic wind energy, and remain in the air due to aerodynamic forces. This article aims to compare the scientific literature with the patents on wind power with tethered airfoils, to obtain better insights into the literature of this area of knowledge. The method used in this study was a comparative bibliometric analysis, using the Web of Science and Derwent Innovations Index databases, and the Network Analysis Interface for Literature Review software and VosViewer. It was possible to verify the main authors, research centers and companies, countries and journals that publish on the subject; the most cited documents; the technological classes; and the networks of collaborations of this work. It was also possible to identify that researches on wind energy with tethered airfoils began their studies in the late 1970s with the first patent apparently dated from 1975 by the inventors Dai and Dai. The first scientific publication was in 1979 by authors Fletcher and Roberts, followed by Loyd in 1980. United States is the country that presented the highest number of patents and scientific papers. Both scientific papers and patents set up networks of collaboration; that is, important authors are interacting with others to establish cooperative partnerships.", "title": "" } ]
scidocsrr
85e6c5f0e19c39e9c300678c03812a6d
Bootstrapping Semantic Parsers from Conversations
[ { "docid": "7af729438f32c198d328a1ebc83d2eeb", "text": "The development of natural language interfaces (NLI's) for databases has been a challenging problem in natural language processing (NLP) since the 1970's. The need for NLI's has become more pronounced due to the widespread access to complex databases now available through the Internet. A challenging problem for empirical NLP is the automated acquisition of NLI's from training examples. We present a method for integrating statistical and relational learning techniques for this task which exploits the strength of both approaches. Experimental results from three different domains suggest that such an approach is more robust than a previous purely logicbased approach. 1 I n t r o d u c t i o n We use the term semantic parsing to refer to the process of mapping a natural language sentence to a structured meaning representation. One interesting application of semantic parsing is building natural language interfaces for online databases. The need for such applications is growing since when information is delivered through the Internet, most users do not know the underlying database access language. An example of such an interface that we have developed is shown in Figure 1. Traditional (rationalist) approaches to constructing database interfaces require an expert to hand-craft an appropriate semantic parser (Woods, 1970; Hendrix et al., 1978). However, such hand-crafted parsers are time consllming to develop and suffer from problems with robustness and incompleteness even for domain specific applications. Nevertheless, very little research in empirical NLP has explored the task of automatically acquiring such interfaces from annotated training examples. The only exceptions of which we are aware axe a statistical approach to mapping airline-information queries into SQL presented in (Miller et al., 1996), a probabilistic decision-tree method for the same task described in (Kuhn and De Mori, 1995), and an approach using relational learning (a.k.a. inductive logic programming, ILP) to learn a logic-based semantic parser described in (Zelle and Mooney, 1996). The existing empirical systems for this task employ either a purely logical or purely statistical approach. The former uses a deterministic parser, which can suffer from some of the same robustness problems as rationalist methods. The latter constructs a probabilistic grammar, which requires supplying a sytactic parse tree as well as a semantic representation for each training sentence, and requires hand-crafting a small set of contextual features on which to condition the parameters of the model. Combining relational and statistical approaches can overcome the need to supply parse-trees and hand-crafted features while retaining the robustness of statistical parsing. The current work is based on the CHILL logic-based parser-acquisition framework (Zelle and Mooney, 1996), retaining access to the complete parse state for making decisions, but building a probabilistic relational model that allows for statistical parsing2 O v e r v i e w o f t h e A p p r o a c h This section reviews our overall approach using an interface developed for a U.S. Geography database (Geoquery) as a sample application (ZeUe and Mooney, 1996) which is available on the Web (see hl:tp://gvg, c s . u t e z a s , edu/users/n~./geo .html). 2.1 S e m a n t i c R e p r e s e n t a t i o n First-order logic is used as a semantic representation language. 
CHILL has also been applied to a restaurant database in which the logical form resembles SQL, and is translated", "title": "" } ]
[ { "docid": "42e4f07ccb9673b32d7c2368cc013eac", "text": "This paper proposes a framework to aid video analysts in detecting suspicious activity within the tremendous amounts of video data that exists in today's world of omnipresent surveillance video. Ideas and techniques for closing the semantic gap between low-level machine readable features of video data and high-level events seen by a human observer are discussed. An evaluation of the event classification and diction technique is presented and future an experiment to refine this technique is proposed. These experiments are used as a lead to a discussion on the most optimal machine learning algorithm to learn the event representation scheme proposed in this paper.", "title": "" }, { "docid": "9f987cd94d103fb3d4496b7d95b6079f", "text": "In the world of sign language, and gestures, a lot of research work has been done over the past three decades. This has brought about a gradual transition from isolated to continuous, and static to dynamic gesture recognition for operations on a limited vocabulary. In present scenario, human machine interactive systems facilitate communication between the deaf, and hearing people in real world situations. In order to improve the accuracy of recognition, many researchers have deployed methods such as HMM, Artificial Neural Networks, and Kinect platform. Effective algorithms for segmentation, classification, pattern matching and recognition have evolved. The main purpose of this paper is to analyze these methods and to effectively compare them, which will enable the reader to reach an optimal solution. This creates both, challenges and opportunities for sign language recognition related research. KeywordsSign Language Recognition, Hidden Markov Model, Artificial Neural Network, Kinect Platform, Fuzzy Logic.", "title": "" }, { "docid": "ca2f6c435c4eac77d6eecaf8d6feea18", "text": "The fifth edition of the diagnostic and statistical manual of mental disorders (DSM-5) (APA in diagnostic and statistical manual of mental disorders, Author, Washington, 2013) has decided to merge the subtypes of pervasive developmental disorders into a single category of autism spectrum disorder (ASD) on the assumption that they cannot be reliably differentiated from one another. The purpose of this review is to analyze the basis of this assumption by examining the comparative studies between Asperger's disorder (AsD) and autistic disorder (AD), and between pervasive developmental disorder not otherwise specified (PDDNOS) and AD. In all, 125 studies compared AsD with AD. Of these, 30 studies concluded that AsD and AD were similar conditions while 95 studies found quantitative and qualitative differences between them. Likewise, 37 studies compared PDDNOS with AD. Nine of these concluded that PDDNOS did not differ significantly from AD while 28 reported quantitative and qualitative differences between them. Taken together, these findings do not support the conceptualization of AD, AsD and PDDNOS as a single category of ASD. Irrespective of the changes proposed by the DSM-5, future research and clinical practice will continue to find ways to meaningfully subtype the ASD.", "title": "" }, { "docid": "0aab0c0fa6a1b0f283478b390dece614", "text": "Hydrokinetic turbines can provide a source of electricity for remote areas located near a river or stream. The objective of this paper is to describe the design, simulation, build, and testing of a novel hydrokinetic turbine. 
The main components of the system are a permanent magnet synchronous generator (PMSG), a machined H-Darrieus rotor, an embedded controls system, and a cataraft. The design and construction of this device was conducted at the Oregon Institute of Technology in Wilsonville, Oregon.", "title": "" }, { "docid": "b1d1196f064bce5c1f6df75a6a5f8bb2", "text": "Studies of ad hoc wireless networks are a relatively new field gaining more popularity for various new applications. In these networks, the Medium Access Control (MAC) protocols are responsible for coordinating the access from active nodes. These protocols are of significant importance since the wireless communication channel is inherently prone to errors and unique problems such as the hidden-terminal problem, the exposedterminal problem, and signal fading effects. Although a lot of research has been conducted on MAC protocols, the various issues involved have mostly been presented in isolation of each other. We therefore make an attempt to present a comprehensive survey of major schemes, integrating various related issues and challenges with a view to providing a big-picture outlook to this vast area. We present a classification of MAC protocols and their brief description, based on their operating principles and underlying features. In conclusion, we present a brief summary of key ideas and a general direction for future work.", "title": "" }, { "docid": "05a07644824dd85eb2251a642c506d18", "text": "BACKGROUND\nWe present a method utilizing Healthcare Cost and Utilization Project (HCUP) dataset for predicting disease risk of individuals based on their medical diagnosis history. The presented methodology may be incorporated in a variety of applications such as risk management, tailored health communication and decision support systems in healthcare.\n\n\nMETHODS\nWe employed the National Inpatient Sample (NIS) data, which is publicly available through Healthcare Cost and Utilization Project (HCUP), to train random forest classifiers for disease prediction. Since the HCUP data is highly imbalanced, we employed an ensemble learning approach based on repeated random sub-sampling. This technique divides the training data into multiple sub-samples, while ensuring that each sub-sample is fully balanced. We compared the performance of support vector machine (SVM), bagging, boosting and RF to predict the risk of eight chronic diseases.\n\n\nRESULTS\nWe predicted eight disease categories. Overall, the RF ensemble learning method outperformed SVM, bagging and boosting in terms of the area under the receiver operating characteristic (ROC) curve (AUC). In addition, RF has the advantage of computing the importance of each variable in the classification process.\n\n\nCONCLUSIONS\nIn combining repeated random sub-sampling with RF, we were able to overcome the class imbalance problem and achieve promising results. Using the national HCUP data set, we predicted eight disease categories with an average AUC of 88.79%.", "title": "" }, { "docid": "2ec611426c0d111058bc38ad0f7e9e04", "text": "We revisit optical physical unclonable functions (PUFs), which were proposed by Pappu et al. in their seminal first publication on PUFs [40, 41]. The first part of the paper treats non-integrated optical PUFs. Their security against modeling attacks is analyzed, and we discuss new image transformations that maximize the PUF’s output entropy while possessing similar error correction capacities as previous approaches [40, 41]. 
Furthermore, the influence of using more than one laser beam, varying laser diameters, and smaller scatterer sizes is systematically studied. Our findings enable the simple enhancement of an optical PUF’s security without additional hardware costs. Next, we discuss the novel application of non-integrated optical PUFs as so-called “Certifiable PUFs”. The latter are useful to achieve practical security in advanced PUF-protocols, as recently observed by Rührmair and van Dijk at Oakland 2013 [48]. Our technique is the first mechanism for Certifiable PUFs in the literature, answering an open problem posed in [48]. In the second part of the paper, we turn to integrated optical PUFs. We build the first prototype of an integrated optical PUF that functions without moving components and investigate its security. We show that these PUFs can surprisingly be attacked by machine learning techniques if the employed scattering structure is linear, and if the raw interference images of the PUF are available to the adversary. Our result enforces the use of non-linear scattering structures within integrated PUFs. The quest for suitable materials is identified as a central, but currently open research problem. Our work makes intensive use of two prototypes of optical PUFs. The presented integratable optical PUF prototype is, to our knowledge, the first of its kind in the literature.", "title": "" }, { "docid": "415c43b39543f2889eca11cbc3669784", "text": "The fabrication of electronic devices based on organic materials, known as ’printed electronics’, is an emerging technology due to its unprecedented advantages involving flexibility, light weight, and portability, which will ultimately lead to future ubiquitous applications. [1] The solution processability of semiconducting and metallic polymers enables the cost-effective fabrication of optoelectronic devices via high-throughput printing techniques. [2] These techniques require high-performance flexible and transparent electrodes (FTEs) fabricated on plastic substrates, but currently, they depend on indium tin oxide (ITO) coated on plastic substrates. However, its intrinsic mechanical brittleness and inferior physical properties arising from low-temperature (T) processing below the melting T of the plastic substrates (i.e., typically below 150 °C) have increased the demand for alternative FTE materials. [3]", "title": "" }, { "docid": "3d766bce113940f70ede89e505d3d40d", "text": "Early diagnosis is a prerequisite for a successful treatment of complex regional pain syndrome (CRPS). In order to describe neurological symptoms which characterize CRPS, we evaluated 145 patients prospectively. Two-thirds of these were women, the mean age at time of investigation was 50.4 years. CRPS followed limb trauma, surgery and nerve lesion. Employing the current IASP criteria 122 patients were classified as CRPS I and 23 as CRPS II.
All patients were assessed clinically pain was quantified using the McGill pain questionnaire, skin temperature was measured by an infrared thermometer and a subgroup of 57 patients was retested in order to determine thermal thresholds (QST). Of our patients 42% reported stressful life events in a close relationship to the onset of CRPS and 41% had a history of chronic pain before CRPS. The latter group of patients gave a higher rating of CRPS pain (P<0.05). The major symptoms were pain at rest in 77% and hyperalgesia in 94%. Typical pain was deep in the limb having a tearing character. Patients getting physical therapy had significantly less pain than those without (P<0.04). Autonomic signs were frequent (98%) and often changed with the duration of CRPS. Skin temperature was warmer in acute and colder in chronic stages (P<0.001). Likewise edema had a higher incidence in acute stages (P<0.001). We found no correlation between pain and autonomic dysfunction. Motor dysfunction (present in 97%) included weakness, tremor, exaggerated tendon reflexes, dystonia or myoclonic jerks. QST revealed increased warm perception thresholds (P<0.02) and decreased cold pain thresholds (P<0.03) of the affected limb. The detailed knowledge of clinical features of CRPS could help physicians early to recognize the disease and thus to improve therapy outcome.", "title": "" }, { "docid": "1c03f51212eca905657ba1361173c055", "text": "A miscellany of new strategies, experimental techniques and theoretical approaches are emerging in the ongoing battle against cancer. Nevertheless, as new, ground-breaking discoveries relating to many and diverse areas of cancer research are made, scientists often have recourse to mathematical modelling in order to elucidate and interpret these experimental findings. Indeed, experimentalists and clinicians alike are becoming increasingly aware of the possibilities afforded by mathematical modelling, recognising that current medical techniques and experimental approaches are often unable to distinguish between various possible mechanisms underlying important aspects of tumour development. This short treatise presents a concise history of the study of solid tumour growth, illustrating the development of mathematical approaches from the early decades of the twentieth century to the present time. Most importantly these mathematical investigations are interwoven with the associated experimental work, showing the crucial relationship between experimental and theoretical approaches, which together have moulded our understanding of tumour growth and contributed to current anti-cancer treatments. Thus, a selection of mathematical publications, including the influential theoretical studies by Burton, Greenspan, Liotta et al., McElwain and co-workers, Adam and Maggelakis, and Byrne and co-workers are juxtaposed with the seminal experimental findings of Gray et al. on oxygenation and radio-sensitivity, Folkman on angiogenesis, Dorie et al. on cell migration and a wide variety of other crucial discoveries. In this way the development of this field of research through the interactions of these different approaches is illuminated, demonstrating the origins of our current understanding of the disease.", "title": "" }, { "docid": "f4c4f721fcbda6a740e45c8052977487", "text": "We propose a method for improving the unconstrained segmentation of speech into phoneme-like units using deep neural networks. 
The proposed approach is not dependent on acoustic models or forced alignment, but operates using the acoustic features directly. Previous solutions of this type were plagued by the tendency to hypothesise additional incorrect phoneme boundaries near the phoneme transitions. We show that the application of deep neural networks is able to reduce this over-segmentation substantially, and achieve improved segmentation accuracies. Furthermore, we find that generative pre-training offers an additional benefit.", "title": "" }, { "docid": "3156539889e42e1796ae2f280d0bbaf5", "text": "ETL process (Extracting-Transforming-Loading) is responsible for (E)xtracting data from heterogeneous sources, (T)ransforming and finally (L)oading them into a data warehouse (DW). Nowadays, Internet and Web 2.0 are generating data at an increasing rate, and therefore put the information systems (IS) face to the challenge of big data. Data integration systems and ETL, in particular, should be revisited and adapted and the well-known solution is based on the data distribution and the parallel/distributed processing. Among all the dimensions defining the complexity of the big data, we focus in this paper on its excessive \"volume\" in order to ensure good performance for ETL processes. In this context, we propose an original approach called Big-ETL (ETL Approach for Big Data) in which we define ETL functionalities that can be run easily on a cluster of computers with MapReduce (MR) paradigm. Big-ETL allows, thereby, parallelizing/distributing ETL at two levels: (i) the ETL process level (coarse granularity level), and (ii) the functionality level (fine level); this allows improving further the ETL performance.", "title": "" }, { "docid": "b46a967ad85c5b64c0f14f703d385b24", "text": "Bitcoin has shown great utility around the world with the drastic increase in its value and global consensus method of proof-of-work (POW). Over the years after the revolution in the digital transaction space, we are looking at major scalability issue with old POW consensus method and bitcoin peak limit of processing only 7 transactions per second. With more companies trying to adopt blockchain to modify their existing systems, blockchain working on old consensus methods and with scalability issues can't deliver the optimal solution. Specifically, with new trends like smart contracts and DAPPs, much better performance is needed to support any actual business applications. Such requirements are pushing the new platforms away from old methods of consensus and adoption of off-chain solutions. In this paper, we discuss various scalability issues with the Bitcoin and Ethereum blockchain and recent proposals like the lighting protocol, sharding, super quadratic sharding, DPoS to solve these issues. We also draw the comparison between these proposals on their ability to overcome scalability limits and highlighting major problems in these approaches. In the end, we propose our solution to suffice the scalability issue and conclude with the fact that with better scalability, blockchain has the potential to outrageously support varied domains of the industry.", "title": "" }, { "docid": "6668f97653611d1ffa3bbf603e3a3418", "text": "Online reviews play a crucial role in helping consumers evaluate and compare products and services. This critical importance of reviews also incentivizes fraudsters (or spammers) to write fake or spam reviews to secretly promote or demote some target products and services. 
Existing approaches to detecting spam reviews and reviewers employed review contents, reviewer behaviors, star rating patterns, and reviewer-product networks for detection. In this research, we further discovered that reviewers’ posting rates (number of reviews written in a period of time) also follow an interesting distribution pattern, which has not been reported before. That is, their posting rates are bimodal. Multiple spammers also tend to collectively and actively post reviews to the same set of products within a short time frame, which we call co-bursting. Furthermore, we found some other interesting patterns in individual reviewers’ temporal dynamics and their co-bursting behaviors with other reviewers. Inspired by these findings, we first propose a two-mode Labeled Hidden Markov Model to model spamming using only individual reviewers’ review posting times. We then extend it to the Coupled Hidden Markov Model to capture both reviewer posting behaviors and co-bursting signals. Our experiments show that the proposed model significantly outperforms state-of-the-art baselines in identifying individual spammers. Furthermore, we propose a cobursting network based on co-bursting relations, which helps detect groups of spammers more effectively than existing approaches.", "title": "" }, { "docid": "28e0bd104c8654ed9ad007c66bae0461", "text": "Today, journalist, information analyst, and everyday news consumers are tasked with discerning and fact-checking the news. This task has became complex due to the ever-growing number of news sources and the mixed tactics of maliciously false sources. To mitigate these problems, we introduce the The News Landscape (NELA) Toolkit: an open source toolkit for the systematic exploration of the news landscape. NELA allows users to explore the credibility of news articles using well-studied content-based markers of reliability and bias, as well as, filter and sort through article predictions based on the users own needs. In addition, NELA allows users to visualize the media landscape at different time slices using a variety of features computed at the source level. NELA is built with a modular, pipeline design, to allow researchers to add new tools to the toolkit with ease. Our demo is an early transition of automated news credibility research to assist human fact-checking efforts and increase the understanding of the news ecosystem as a whole. To use this tool, go to http://nelatoolkit.science", "title": "" }, { "docid": "5b0530f94f476754034c92292e02b390", "text": "Many seemingly simple questions that individual users face in their daily lives may actually require substantial number of computing resources to identify the right answers. For example, a user may want to determine the right thermostat settings for different rooms of a house based on a tolerance range such that the energy consumption and costs can be maximally reduced while still offering comfortable temperatures in the house. Such answers can be determined through simulations. However, some simulation models as in this example are stochastic, which require the execution of a large number of simulation tasks and aggregation of results to ascertain if the outcomes lie within specified confidence intervals. Some other simulation models, such as the study of traffic conditions using simulations may need multiple instances to be executed for a number of different parameters. 
Cloud computing has opened up new avenues for individuals and organizations with limited resources to obtain answers to problems that hitherto required expensive and computationally-intensive resources. This paper presents SIMaaS, which is a cloud-based Simulation-as-a-Service to address these challenges. We demonstrate how lightweight solutions using Linux containers (e.g., Docker) are better suited to support such services instead of heavyweight hypervisor-based solutions, which are shown to incur substantial overhead in provisioning virtual machines on-demand. Empirical results validating our claims are presented in the context of two", "title": "" }, { "docid": "aa6156d21ddb525fca036040aeb3db37", "text": "The rapid development of the Internet-of-Things requires hardware that is both energy-efficient and flexible, and an ultra-low-power Field-Programmable-Gate-Array (FPGA) is a very promising solution. This paper presents a near/sub-threshold FPGA with low-swing global interconnect, folded switch box (SB), per-path voltage scaling, and power-gating. A fully programmable 512-look-up-table FPGA chip is fabricated in 130nm CMOS. When implementing a 4bit-adder, the measured energy of the proposed FPGA is 15% less than the normalized energy of the state-of-the-art. When implementing fifteen selected low-power applications, the estimated energy of the proposed FPGA is on average 75x lower than Microsemi IGLOO.", "title": "" }, { "docid": "510439267c11c53b31dcf0b1c40e331b", "text": "Spatial multicriteria decision problems are decision problems where one needs to take multiple conflicting criteria as well as geographical knowledge into account. In such a context, exploratory spatial analysis is known to provide tools to visualize as much data as possible on maps but does not integrate multicriteria aspects. Also, none of the tools provided by multicriteria analysis were initially destined to be used in a geographical context. In this paper, we propose an application of the PROMETHEE and GAIA ranking methods to Geographical Information Systems (GIS). The aim is to help decision makers obtain rankings of geographical entities and understand why such rankings have been obtained. To do that, we make use of the visual approach of the GAIA method and adapt it to display the results on geographical maps. This approach is then extended to cover several weaknesses of the adaptation. Finally, it is applied to a study of the region of Brussels as well as an evaluation of the Human Development Index (HDI) in Europe.", "title": "" }, { "docid": "4a900831ccc5cd7035fa06f72768a86b", "text": "A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, learned without any human supervision! In this paper, using a Task & Talk reference game between two agents as a testbed, we present a sequence of ‘negative’ results culminating in a ‘positive’ one – showing that while most agent-invented languages are effective (i.e.
achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge ‘naturally’, despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.", "title": "" }, { "docid": "ee378b32ee744f0377a3723ec00f4313", "text": "In this article, we present some extensions of the rough set approach and we outline a challenge for the rough set based research.", "title": "" } ]
scidocsrr
5dd5901e6e15468d1281917e9f3c7cca
Security and privacy in mobile social networks: challenges and solutions
[ { "docid": "93bad64439be375200cce65a37c6b8c6", "text": "The mobile social network (MSN) combines techniques in social science and wireless communications for mobile networking. The MSN can be considered as a system which provides a variety of data delivery services involving the social relationship among mobile users. This paper presents a comprehensive survey on the MSN specifically from the perspectives of applications, network architectures, and protocol design issues. First, major applications of the MSN are reviewed. Next, different architectures of the MSN are presented. Each of these different architectures supports different data delivery scenarios. The unique characteristics of social relationship in MSN give rise to different protocol design issues. These research issues (e.g., community detection, mobility, content distribution, content sharing protocols, and privacy) and the related approaches to address data delivery in the MSN are described. At the end, several important research directions are outlined.", "title": "" } ]
[ { "docid": "d42ed4f231d51cacaf1f42de1c723c31", "text": "A stepped circular waveguide dual-mode (SCWDM) filter is fully investigated in this paper, from its basic characteristic to design formula. As compared to a conventional circular waveguide dual-mode (CWDM) filter, it provides more freedoms for shifting and suppressing the spurious modes in a wide frequency band. This useful attribute can be used for a broadband waveguide contiguous output multiplexer (OMUX) in satellite payloads. The scaling factor for relating coupling value M to its corresponding impedance inverter K in a stepped cavity is derived for full-wave EM design. To validate the design technique, four design examples are presented. One challenging example is a wideband 17-channel Ku-band contiguous multiplexer with two SCWDM channel filters. A triplexer hardware covering the same included bandwidth is also designed and measured. The measurement results show excellent agreement with those of the theoretical EM designs, justifying the effectiveness of full-wave EM modal analysis. Comparing to the best possible design of conventional CWDM filters, at least 30% more spurious-free range in both Ku-band and C-band can be achieved by using SCWDM filters.", "title": "" }, { "docid": "a7665a6c0955b5d4ca2c4c8cdc183974", "text": "Deep learning has recently helped AI systems to achieve human-level performance in several domains, including speech recognition, object classification, and playing several types of games. The major benefit of deep learning is that it enables end-to-end learning of representations of the data on several levels of abstraction. However, the overall network architecture and the learning algorithms’ sensitive hyperparameters still need to be set manually by human experts. In this talk, I will discuss extensions of Bayesian optimization for handling this problem effectively, thereby paving the way to fully automated end-to-end learning. I will focus on speeding up Bayesian optimization by reasoning over data subsets and initial learning curves, sometimes resulting in 100-fold speedups in finding good hyperparameter settings. I will also show competition-winning practical systems for automated machine learning (AutoML) and briefly show related applications to the end-to-end optimization of algorithms for solving hard combinatorial problems. Bio. Frank Hutter is an Emmy Noether Research Group Lead (eq. Asst. Prof.) at the Computer Science Department of the University of Freiburg (Germany). He received his PhD from the University of British Columbia (2009). Frank’s main research interests span artificial intelligence, machine learning, combinatorial optimization, and automated algorithm design. He received a doctoral dissertation award from the Canadian Artificial Intelligence Association and, with his coauthors, several best paper awards (including from JAIR and IJCAI) and prizes in international competitions on machine learning, SAT solving, and AI planning. In 2016 he received an ERC Starting Grant for a project on automating deep learning based on Bayesian optimization, Bayesian neural networks, and deep reinforcement learning. Frontiers in Recurrent Neural Network Research", "title": "" }, { "docid": "1bf796a1b7e802076e25b9d0742a7f91", "text": "Modern computing devices and user interfaces have necessitated highly interactive querying. Some of these interfaces issue a large number of dynamically changing and continuous queries to the backend. 
In others, users expect to inspect results during the query formulation process, in order to guide or help them towards specifying a full-fledged query. Thus, users end up issuing a fast-changing workload to the underlying database. In such situations, the user's query intent can be thought of as being in flux. In this paper, we show that the traditional query execution engines are not well-suited for this new class of highly interactive workloads. We propose a novel model to interpret the variability of likely queries in a workload. We implemented a cyclic scan-based approach to process queries from such workloads in an efficient and practical manner while reducing the overall system load. We evaluate and compare our methods with traditional systems and demonstrate the scalability of our approach, enabling thousands of queries to run simultaneously within interactive response times given low memory and CPU requirements.", "title": "" }, { "docid": "55ff1f320953b9c9405541c8afd7841a", "text": "Uber used a disruptive business model driven by digital technology to trigger a ride-sharing revolution. The institutional sources of the company’s platform ecosystem architecture were analyzed to explain this revolutionary change. Both an empirical analysis of a co-existing development trajectory with taxis and institutional enablers that helped to create Uber’s platform ecosystem were analyzed. The analysis identified a correspondence with the “two-faced” nature of ICT that nurtures uncaptured GDP. This two-faced nature of ICT can be attributed to a virtuous cycle of decline in prices and an increase in the number of trips. We show that this cycle can be attributed to a self-propagating function that plays a vital role in the spinoff from traditional co-evolution to new co-evolution. Furthermore, we use the three mega-trends of ICT advancement, paradigm change and a shift in people’s preferences to explain the secret of Uber’s system success. All these noteworthy elements seem essential to a well-functioning platform ecosystem architecture, not only in transportation but also for other business institutions.", "title": "" }, { "docid": "97fc340e06c98a845048208a3591463d", "text": "Spatial orientation strongly relies on visual and whole-body information available while moving through space. As virtual environments allow to isolate the contribution of visual information from the contribution of whole-body information, they are an attractive methodological means to investigate the role of visual information for spatial orientation. Using an elementary spatial orientation task (triangle completion) in a simple virtual environment we studied the effect of amount of simultaneously available visual information (geometric field of view) and triangle layout on the integration and uptake of directional (turn) and distance information under visual simulation conditions. While the amount of simultaneously available visual information had no effect on homing errors, triangle layout substantially affected homing errors. Further analysis of the observed homing errors by means of an Encoding Error Model revealed that subjects navigating under visual simulation conditions had problems in accurately taking up and representing directional (turn) information, an effect which was not observed in experiments reported in the literature from similar whole-body conditions.
Implications and prospects for investigating spatial orientation by means of virtual environments are discussed considering the present experiments as well as other work on spatial cognition using virtual environments.", "title": "" }, { "docid": "43beba8ec2a324546bce095e9c1d9f0c", "text": "Scenario-based specifications such as Message Sequence Charts (MSCs) are useful as part of a requirements specification. A scenario is a partial story, describing how system components, the environment, and users work concurrently and interact in order to provide system level functionality. Scenarios need to be combined to provide a more complete description of system behavior. Consequently, scenario synthesis is central to the effective use of scenario descriptions. How should a set of scenarios be interpreted? How do they relate to one another? What is the underlying semantics? What assumptions are made when synthesizing behavior models from multiple scenarios? In this paper, we present an approach to scenario synthesis based on a clear sound semantics, which can support and integrate many of the existing approaches to scenario synthesis. The contributions of the paper are threefold. We first define an MSC language with sound abstract semantics in terms of labeled transition systems and parallel composition. The language integrates existing approaches based on scenario composition by using high-level MSCs (hMSCs) and those based on state identification by introducing explicit component state labeling. This combination allows stakeholders to break up scenario specifications into manageable parts and reuse scenarios using hMSCs; it also allows them to introduce additional domain-specific information and general assumptions explicitly into the scenario specification using state labels. Second, we provide a sound synthesis algorithm which translates scenarios into a behavioral specification in the form of Finite Sequential Processes. This specification can be analyzed with the Labeled Transition System Analyzer using model checking and animation. Finally, we demonstrate how many of the assumptions embedded in existing synthesis approaches can be made explicit and modeled in our approach. Thus, we provide the basis for a common approach to scenario-based specification, synthesis, and analysis.", "title": "" }, { "docid": "c91057dab0cd143042e180e5e432a4fa", "text": "The topic of this paper is a Genetic Algorithm solution to the Vehicle Routing Problem with Time Windows, a variant of one of the most common problems in contemporary operations research. The paper will introduce the problem starting with more general Traveling Salesman and Vehicle Routing problems and present some of the prevailing strategies for solving them, focusing on Genetic Algorithms. At the end, it will summarize the Genetic Algorithm solution proposed by K.Q. Zhu which was used in the programming part of the project.", "title": "" }, { "docid": "f1977e5f8fbc0df4df0ac6bf1715c254", "text": "Instabilities in MOS-based devices with various substrates ranging from Si, SiGe, III-V to 2D channel materials, can be explained by defect levels in the dielectrics and non-radiative multi-phonon (NMP) barriers. However, recent results obtained on single defects have demonstrated that they can show a highly complex behaviour since they can transform between various states. As a consequence, detailed physical models are complicated and computationally expensive. 
As will be shown here, as long as only lifetime predictions for an ensemble of defects is needed, considerable simplifications are possible. We present and validate an oxide defect model that captures the essence of full physical models while reducing the complexity substantially. We apply this model to investigate the improvement in positive bias temperature instabilities due to a reliability anneal. Furthermore, we corroborate the simulated defect bands with prior defect-centric studies and perform lifetime projections.", "title": "" }, { "docid": "776ddd7f1330dba24ed49d32bf4969c5", "text": "BACKGROUND\nInternet sources are becoming increasingly important in seeking health information, such that they may have a significant effect on health care decisions and outcomes. Hence, given the wide range of different sources of Web-based health information (WHI) from different organizations and individuals, it is important to understand how information seekers evaluate and select the sources that they use, and more specifically, how they assess their credibility and trustworthiness.\n\n\nOBJECTIVE\nThe aim of this study was to review empirical studies on trust and credibility in the use of WHI. The article seeks to present a profile of the research conducted on trust and credibility in WHI seeking, to identify the factors that impact judgments of trustworthiness and credibility, and to explore the role of demographic factors affecting trust formation. On this basis, it aimed to identify the gaps in current knowledge and to propose an agenda for future research.\n\n\nMETHODS\nA systematic literature review was conducted. Searches were conducted using a variety of combinations of the terms WHI, trust, credibility, and their variants in four multi-disciplinary and four health-oriented databases. Articles selected were published in English from 2000 onwards; this process generated 3827 unique records. After the application of the exclusion criteria, 73 were analyzed fully.\n\n\nRESULTS\nInterest in this topic has persisted over the last 15 years, with articles being published in medicine, social science, and computer science and originating mostly from the United States and the United Kingdom. Documents in the final dataset fell into 3 categories: (1) those using trust or credibility as a dependent variable, (2) those using trust or credibility as an independent variable, and (3) studies of the demographic factors that influence the role of trust or credibility in WHI seeking. There is a consensus that website design, clear layout, interactive features, and the authority of the owner have a positive effect on trust or credibility, whereas advertising has a negative effect. With regard to content features, authority of the author, ease of use, and content have a positive effect on trust or credibility formation. Demographic factors influencing trust formation are age, gender, and perceived health status.\n\n\nCONCLUSIONS\nThere is considerable scope for further research. 
This includes increased clarity of the interaction between the variables associated with health information seeking, increased consistency on the measurement of trust and credibility, a greater focus on specific WHI sources, and enhanced understanding of the impact of demographic variables on trust and credibility judgments.", "title": "" }, { "docid": "d03d831ceddf508d58298a45c9373ccd", "text": "Recent BIO-tagging-based neural semantic role labeling models are very high performing, but assume gold predicates as part of the input and cannot incorporate span-level features. We propose an endto-end approach for jointly predicting all predicates, arguments spans, and the relations between them. The model makes independent decisions about what relationship, if any, holds between every possible word-span pair, and learns contextualized span representations that provide rich, shared input features for each decision. Experiments demonstrate that this approach sets a new state of the art on PropBank SRL without gold predicates.1", "title": "" }, { "docid": "4e93ce8e5a6175dd558954e560d7ddc2", "text": "This paper presents a new type of narrow band filter with good electrical performance and manufacturing flexibility, based on the newly introduced groove gap waveguide technology. The designed third and fifth-order filters work at Ku band with 1% fractional bandwidth. These filter structures are manufactured with an allowable gap between two metal blocks, in such a way that there is no requirement for electrical contact and alignment between the blocks. This is a major manufacturing advantage compared to normal rectangular waveguide filters. The measured results of the manufactured filters show reasonably good agreement with the full-wave simulated results, without any tuning or adjustments.", "title": "" }, { "docid": "cb752bb215cde34e935d3a0d3ba3c98c", "text": "Low earth orbit (LEO) satellite constellations could play an important role in future mobile communication networks due to their characteristics, such as global coverage and low propagation delays. However, because of the non-stationarity of the satellites, a call may be subjected to handovers, which can be cell or satellite handovers. Quite many techniques have been proposed in the literature dealing with the cell handover issue. In this paper, a satellite handover procedure is proposed, that investigates and exploits the partial satellite diversity (namely, the existing common coverage area between contiguous satellites) in order to provide an efficient handover strategy, based always on a tradeoff of the blocking and forced termination probabilities for a fair treatment of new and handover calls. Three different criteria were examined for the selection of a satellite. Each one of them could be applied either to new or handover calls, therefore we investigated nine different service schemes. A simulation tool was implemented in order to compare the different service schemes and simulation results are presented at the end of the paper.", "title": "" }, { "docid": "a2f91e55b5096b86f6fa92e701c62898", "text": "The main question we address in this paper is how to use purely textual description of categories with no training images to learn visual classifiers for these categories. We propose an approach for zero-shot learning of object categories where the description of unseen categories comes in the form of typical text such as an encyclopedia entry, without the need to explicitly defined attributes. 
We propose and investigate two baseline formulations, based on regression and domain adaptation. Then, we propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the classifier parameters for new classes. We applied the proposed approach on two fine-grained categorization datasets, and the results indicate successful classifier prediction.", "title": "" }, { "docid": "1159d85ed21049f3fb70db58307eafff", "text": "Cannabis sativa L. is an annual dioecious plant from Central Asia. Cannabinoids, flavonoids, stilbenoids, terpenoids, alkaloids and lignans are some of the secondary metabolites present in C. sativa. Earlier reviews were focused on isolation and identification of more than 480 chemical compounds; this review deals with the biosynthesis of the secondary metabolites present in this plant. Cannabinoid biosynthesis and some closely related pathways that involve the same precursors are discussed.", "title": "" }, { "docid": "bab7a21f903157fcd0d3e70da4e7261a", "text": "The clinical, electrophysiological and morphological findings (light and electron microscopy of the sural nerve and gastrocnemius muscle) are reported in an unusual case of Guillain-Barré polyneuropathy with an association of muscle hypertrophy and a syndrome of continuous motor unit activity. Fasciculation, muscle stiffness, cramps, myokymia, impaired muscle relaxation and percussion myotonia, with their electromyographic accompaniments, were abolished by peripheral nerve blocking, carbamazepine, valproic acid or prednisone therapy. Muscle hypertrophy, which was confirmed by morphometric data, diminished 2 months after the beginning of prednisone therapy. Electrophysiological and nerve biopsy findings revealed a mixed process of axonal degeneration and segmental demyelination. Muscle biopsy specimen showed a marked predominance and hypertrophy of type-I fibres and atrophy, especially of type-II fibres.", "title": "" }, { "docid": "f9dc4c6277ad29a757dedf26f3572dce", "text": "The transdisciplinary research project Virtopsy is dedicated to implementing modern imaging techniques into forensic medicine and pathology in order to augment current examination techniques or even to offer alternative methods. Our project relies on three pillars: three-dimensional (3D) surface scanning for the documentation of body surfaces, and both multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) to visualise the internal body. Three-dimensional surface scanning has delivered remarkable results in the past in the 3D documentation of patterned injuries and of objects of forensic interest as well as whole crime scenes. Imaging of the interior of corpses is performed using MSCT and/or MRI. MRI, in addition, is also well suited to the examination of surviving victims of assault, especially choking, and helps visualise internal injuries not seen at external examination of the victim. Apart from the accuracy and three-dimensionality that conventional documentations lack, these techniques allow for the re-examination of the corpse and the crime scene even decades later, after burial of the corpse and liberation of the crime scene. 
We believe that this virtual, non-invasive or minimally invasive approach will improve forensic medicine in the near future.", "title": "" }, { "docid": "b513d1cbf3b2f649afcea4d0ab6784ac", "text": "RoboSimian is a quadruped robot inspired by an ape-like morphology, with four symmetric limbs that provide a large dexterous workspace and high torque output capabilities. Advantages of using RoboSimian for rough terrain locomotion include (1) its large, stable base of support, and (2) existence of redundant kinematic solutions, toward avoiding collisions with complex terrain obstacles. However, these same advantages provide significant challenges in experimental implementation of walking gaits. Specifically: (1) a wide support base results in high variability of required body pose and foothold heights, in particular when compared with planning for humanoid robots, (2) the long limbs on RoboSimian have a strong proclivity for self-collision and terrain collision, requiring particular care in trajectory planning, and (3) having rear limbs outside the field of view requires adequate perception with respect to a world map. In our results, we present a tractable means of planning statically stable and collision-free gaits, which combines practical heuristics for kinematics with traditional randomized (RRT) search algorithms. In planning experiments, our method outperforms other tested methodologies. Finally, real-world testing indicates that perception limitations provide the greatest challenge in real-world implementation.", "title": "" }, { "docid": "5873204bba0bd16262274d4961d3d5f9", "text": "The analysis of the adaptive behaviour of many different kinds of systems such as humans, animals and machines, requires more general ways of assessing their cognitive abilities. This need is strengthened by increasingly more tasks being analysed for and completed by a wider diversity of systems, including swarms and hybrids. The notion of universal test has recently emerged in the context of machine intelligence evaluation as a way to define and use the same cognitive test for a variety of systems, using some principled tasks and adapting the interface to each particular subject. However, how far can universal tests be taken? This paper analyses this question in terms of subjects, environments, space-time resolution, rewards and interfaces. This leads to a number of findings, insights and caveats, according to several levels where universal tests may be progressively more difficult to conceive, implement and administer. One of the most significant contributions is given by the realisation that more universal tests are defined as maximisations of less universal tests for a variety of configurations. This means that universal tests must be necessarily adaptive.", "title": "" }, { "docid": "1bf2f9e48a67842412a3b32bb2dd3434", "text": "Since Paul Broca, the relationship between mind and brain has been the central preoccupation of cognitive neuroscience. In the 19th century, recognition that mental faculties might be understood by observations of individuals with brain damage led to vigorous debates about the properties of mind. By the end of the First World War, neurologists had outlined basic frameworks for the neural organization of language, perception, and motor cognition. 
Geschwind revived these frameworks in the 1960s and by the 1980s, lesion studies had incorporated methods from experimental psychology, models from cognitive science, formalities from computational approaches, and early developments in structural brain imaging. Around the same time, functional neuroimaging entered the scene. Early xenon probes evolved to the present-day wonders of BOLD and perfusion imaging. In a quick two decades, driven by these technical advances, centers for cognitive neuroscience now dot the landscape, journals such as this one are thriving, and the annual meeting of the Society for Cognitive Neuroscience is overflowing. In these heady times, a group of young cognitive neuroscientists training at a center in which human lesion studies and functional neuroimaging are pursued with similar vigor inquire about the relative impact of these two methods on the field. Fellows and colleagues, in their article titled ‘‘Method matters: An empirical study of impact on cognitive neuroscience,’’ point out that the nature of the evidence derived from the two methods are different. Importantly, they have complementary strengths and weaknesses. A critical difference highlighted in their article is that functional imaging by necessity provides correlational data, whereas lesion studies can support necessity claims for a specific brain region in a particular function. The authors hypothesize that despite the obvious growth of functional imaging in the last decade or so, lesion studies would have a disproportionate impact on cognitive neuroscience because they offer the possibility of establishing a causal role for structure in behavior in a way that is difficult to establish using functional imaging. The authors did not confirm this hypothesis. Using bibliometric methods, they found that functional imaging studies were cited three times as often as lesion studies, in large part because imaging studies were more likely to be published in high-impact journals. Given the complementary nature of the evidence from both methods, they anticipated extensive cross-method references. However, they found a within-method bias to citations generally, and, furthermore, functional imaging articles cited lesion studies considerably less often than the converse. To confirm the trends indicated by Fellows and colleagues, I looked at the distribution of cognitive neuroscience methods in the abstracts accepted for the 2005 Annual Meeting of the Cognitive Neuroscience Society (see Figure 1). Imaging studies composed over a third of all abstracts, followed by electrophysiological studies, the bulk of which were event-related potential (ERP) and magnetoencephalogram (MEG) studies. Studies that used patient populations composed 16% of the abstracts. The patient studies were almost evenly split between those focused on understanding a disease (47%), such as autism or schizophrenia, and those in which structure–function relationships were a consideration (53%). These observations do not speak of the final impact of these studies, but they do point out the relative lack of patient-based studies, particularly those addressing basic cognitive neuroscience questions. Fellows and colleagues pose the following question: Despite the greater ‘‘in-principle’’ inferential strength of lesion than functional imaging studies, why in practice do they have less impact on the field? They suggest that sociologic and practical considerations, rather than scientific merit, might be at play. 
Here, I offer my speculations on the factors that contribute to the relative impact of these methods. These speculations are not intended to be comprehensive. Rather they are intended to begin conversations in response to the question posed by Fellows and colleagues. In my view, the disproportionate impact of functional imaging compared to lesion studies is driven by three factors: the appeal of novelty and technology, by ease of access to neural data, and, in a subtle way, to the pragmatics of hypothesis testing. First, novelty is intrinsically appealing. As a clinician, I often encounter patients requesting the latest medications, even when they are more expensive and not demonstrably better than older ones. As scions of the enlightenment, many of us believe in progress, and that things newer are generally things better. Lesion studies have been around for a century and a half. Any advances made now are likely to be incremental. By contrast, functional imaging is truly a new way to examine the University of Pennsylvania", "title": "" } ]
scidocsrr
08ca617c476314328c6f79f17a8bf40d
Developing and evaluating a mobile driver fatigue detection network based on electroencephalograph signals
[ { "docid": "7e70955671d2ad8728fdba0fc3ec5548", "text": "Detection of drowsiness based on extraction of IMF’s from EEG signal using EMD process and characterizing the features using trained Artificial Neural Network (ANN) is introduced in this paper. Our subjects are 8 volunteers who have not slept for last 24 hour due to travelling. EEG signal was recorded when the subject is sitting on a chair facing video camera and are obliged to see camera only. ANN is trained using a utility made in Matlab to mark the EEG data for drowsy state and awaked state and then extract IMF’s of marked data using EMD to prepare feature inputs for Neural Network. Once the neural network is trained, IMFs of New subjects EEG Signals is given as input and ANN will give output in two different states i.e. ‘drowsy’ or ‘awake’. The system is tested on 8 different subjects and it provided good results with more than 84.8% of correct detection of drowsy states.", "title": "" }, { "docid": "33a1cfafef3acf0a27d6622e04147000", "text": "BACKGROUND\nEntropy is a nonlinear index that can reflect the degree of chaos within a system. It is often used to analyze epileptic electroencephalograms (EEG) to detect whether there is an epileptic attack. Much research into the state inspection of epileptic seizures has been conducted based on sample entropy (SampEn). However, the study of epileptic seizures based on fuzzy entropy (FuzzyEn) has lagged behind.\n\n\nNEW METHODS\nWe propose a method of state inspection of epileptic seizures based on FuzzyEn. The method first calculates the FuzzyEn of EEG signals from different epileptic states, and then feature selection is conducted to obtain classification features. Finally, we use the acquired classification features and a grid optimization method to train support vector machines (SVM).\n\n\nRESULTS\nThe results of two open-EEG datasets in epileptics show that there are major differences between seizure attacks and non-seizure attacks, such that FuzzyEn can be used to detect epilepsy, and our method obtains better classification performance (accuracy, sensitivity and specificity of classification of the CHB-MIT are 98.31%, 98.27% and 98.36%, and of the Bonn are 100%, 100%, 100%, respectively).\n\n\nCOMPARISONS WITH EXISTING METHOD(S)\nTo verify the performance of the proposed method, a comparison of the classification performance for epileptic seizures using FuzzyEn and SampEn is conducted. Our method obtains better classification performance, which is superior to the SampEn-based methods currently in use.\n\n\nCONCLUSIONS\nThe results indicate that FuzzyEn is a better index for detecting epileptic seizures effectively. The FuzzyEn-based method is preferable, exhibiting potential desirable applications for medical treatment.", "title": "" }, { "docid": "8238edb8ec7b9b1dd076c61c619b5da3", "text": "Two complexity parameters of EEG, i.e. approximate entropy (ApEn) and Kolmogorov complexity (Kc) are utilized to characterize the complexity and irregularity of EEG data under the different mental fatigue states. Then the kernel principal component analysis (KPCA) and Hidden Markov Model (HMM) are combined to differentiate two mental fatigue states. The KPCA algorithm is employed to extract nonlinear features from the complexity parameters of EEG and improve the generalization performance of HMM. The investigation suggests that ApEn and Kc can effectively describe the dynamic complexity of EEG, which is strongly correlated with mental fatigue. 
Both complexity parameters are significantly decreased (P < 0.005) as the mental fatigue level increases. These complexity parameters may be used as the indices of the mental fatigue level. Moreover, the joint KPCA–HMM method can effectively reduce the dimensionality of the feature vectors, accelerate the classification speed and achieve higher classification accuracy (84%) of mental fatigue. Hence KPCA–HMM could be a promising model for the estimation of mental fatigue. Crown Copyright 2010 Published by Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "c120406dd4e60a9bb33dd4a87cbd3616", "text": "Intersubjectivity is an important concept in psychology and sociology. It refers to sharing conceptualizations through social interactions in a community and using such shared conceptualization as a resource to interpret things that happen in everyday life. In this work, we make use of intersubjectivity as the basis to model shared stance and subjectivity for sentiment analysis. We construct an intersubjectivity network which links review writers, terms they used, as well as the polarities of the terms. Based on this network model, we propose a method to learn writer embeddings which are subsequently incorporated into a convolutional neural network for sentiment analysis. Evaluations on the IMDB, Yelp 2013 and Yelp 2014 datasets show that the proposed approach has achieved the state-of-the-art performance.", "title": "" }, { "docid": "aa7114bf0038f2ab4df6908ed7d28813", "text": "Sematch is an integrated framework for the development, evaluation and application of semantic similarity for Knowledge Graphs. The framework provides a number of similarity tools and datasets, and allows users to compute semantic similarity scores of concepts, words, and entities, as well as to interact with Knowledge Graphs through SPARQL queries. Sematch focuses on knowledge-based semantic similarity that relies on structural knowledge in a given taxonomy (e.g. depth, path length, least common subsumer), and statistical information contents. Researchers can use Sematch to develop and evaluate semantic similarity metrics and exploit these metrics in applications. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "37dc4a306f043684042e6af01223a275", "text": "In recent years, studies about control methods for complex machines and robots have been developed rapidly. Biped robots are often treated as inverted pendulums for its simple structure. But modeling of robot and other complex machines is a time-consuming procedure. A new method of modeling and simulation of robot based on SimMechanics is proposed in this paper. Physical modeling, parameter setting and simulation are presented in detail. The SimMechanics block model is first used in modeling and simulation of inverted pendulums. Simulation results of the SimMechanics block model and mathematical model for single inverted pendulum are compared. Furthermore, a full state feedback controller is designed to satisfy the performance requirement. It indicates that SimMechanics can be used for unstable nonlinear system and robots.", "title": "" }, { "docid": "86c998f5ffcddb0b74360ff27b8fead4", "text": "Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost to learn attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions for each modality neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions among two groups of input channels, while low-rank bilinear pooling extracts the joint representations for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit eight-attention maps of the BAN efficiently. 
We quantitatively and qualitatively evaluate our model on visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-arts on both datasets.", "title": "" }, { "docid": "a3642ac7aff09f038df823bc2bab3b95", "text": "We assess the risk of phishing on mobile platforms. Mobile operating systems and browsers lack secure application identity indicators, so the user cannot always identify whether a link has taken her to the expected application. We conduct a systematic analysis of ways in which mobile applications and web sites link to each other. To evaluate the risk, we study 85 web sites and 100 mobile applications and discover that web sites and applications regularly ask users to type their passwords into contexts that are vulnerable to spoofing. Our implementation of sample phishing attacks on the Android and iOS platforms demonstrates that attackers can spoof legitimate applications with high accuracy, suggesting that the risk of phishing attacks on mobile platforms is greater than has previously been appreciated.", "title": "" }, { "docid": "c777d2fcc2a27ca17ea82d4326592948", "text": "The existing methods for image captioning usually train the language model under the cross entropy loss, which results in the exposure bias and inconsistency of evaluation metric. Recent research has shown these two issues can be well addressed by policy gradient method in reinforcement learning domain attributable to its unique capability of directly optimizing the discrete and non-differentiable evaluation metric. In this paper, we utilize reinforcement learning method to train the image captioning model. Specifically, we train our image captioning model to maximize the overall reward of the sentences by adopting the temporal-difference (TD) learning method, which takes the correlation between temporally successive actions into account. In this way, we assign different values to different words in one sampled sentence by a discounted coefficient when back-propagating the gradient with the REINFORCE algorithm, enabling the correlation between actions to be learned. Besides, instead of estimating a “baseline” to normalize the rewards with another network, we utilize the reward of another Monte-Carlo sample as the “baseline” to avoid high variance. We show that our proposed method can improve the quality of generated captions and outperforms the state-of-the-art methods on the benchmark dataset MS COCO in terms of seven evaluation metrics.", "title": "" }, { "docid": "5b50e84437dc27f5b38b53d8613ae2c7", "text": "We present a practical vision-based robotic bin-picking system that performs detection and 3D pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper. Two main innovations enable our system to achieve real-time robust and accurate operation. First, we use a multi-flash camera that extracts robust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliably detect objects and estimate their poses. FDCM improves the accuracy of chamfer matching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges, a 3D distance transform, and directional integral images. 
We empirically show that these speedups, combined with the use of bounds in the spatial and hypothesis domains, give the algorithm sublinear computational complexity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching. In addition to significantly improving upon the accuracy of previous chamfer matching methods in all of the evaluated applications, FDCM is up to two orders of magnitude faster than the previous methods.", "title": "" }, { "docid": "0109c8c7663df5e8ac2abd805924d9f6", "text": "To ensure system stability and availability during disturbances, industrial facilities equipped with on-site generation, generally utilize some type of load shedding scheme. In recent years, conventional underfrequency and PLC-based load shedding schemes have been integrated with computerized power management systems to provide an “automated” load shedding system. However, these automated solutions lack system operating knowledge and are still best-guess methods which typically result in excessive or insufficient load shedding. An intelligent load shedding system can provide faster and optimal load relief by utilizing actual operating conditions and knowledge of past system disturbances. This paper presents the need for an intelligent, automated load shedding system. Simulation of case studies for two industrial electrical networks are performed to demonstrate the advantages of an intelligent load shedding system over conventional load shedding methods from the design and operation perspectives. Index Terms — Load Shedding (LS), Intelligent Load Shedding (ILS), Power System Transient Stability, Frequency Relay, Programmable Logic Controller (PLC), Power Management System", "title": "" }, { "docid": "ff4cfe56f31e21a8f69164790eb39634", "text": "Active individuals often perform exercises in the heat following heat stress exposure (HSE) regardless of the time-of-day and its variation in body temperature. However, there is no information concerning the diurnal effects of a rise in body temperature after HSE on subsequent exercise performance in a hot environment. This study therefore investigated the diurnal effects of prior HSE on both sprint and endurance exercise capacity in the heat. Eight male volunteers completed four trials which included sprint and endurance cycling tests at 30 °C and 50% relative humidity. At first, volunteers completed a 30-min pre-exercise routine (30-PR): a seated rest in a temperate environment in AM (AmR) or PM (PmR) (Rest trials); and a warm water immersion at 40 °C to induce a 1 °C increase in core temperature in AM (AmW) or PM (PmW) (HSE trials). Volunteers subsequently commenced exercise at 0800 h in AmR/AmW and at 1700 h in PmR/PmW. The sprint test determined a 10-sec maximal sprint power at 5 kp. Then, the endurance test was conducted to measure time to exhaustion at 60% peak oxygen uptake. Maximal sprint power was similar between trials (p = 0.787). Time to exhaustion in AmW (mean±SD; 15 ± 8 min) was less than AmR (38 ± 16 min; p < 0.01) and PmR (43 ± 24 min; p < 0.01) but similar with PmW (24 ± 9 min). Core temperature was higher from post 30-PR to 6 min into the endurance test in AmW and PmW than AmR and PmR (p < 0.05) and at post 30-PR and the start of the endurance test in PmR than AmR (p < 0.05). The rate of rise in core temperature during the endurance test was greater in AmR than AmW and PmW (p < 0.05). 
Mean skin temperature was higher from post 30-PR to 6 min into the endurance test in HSE trials than Rest trials (p < 0.05). Mean body temperature was higher from post 30-PR to 6 min into the endurance test in AmW and PmW than AmR and PmR (p < 0.05) and the start to 6 min into the endurance test in PmR than AmR (p < 0.05). Convective, radiant, dry and evaporative heat losses were greater on HSE trials than on Rest trials (p < 0.001). Heart rate and cutaneous vascular conductance were higher at post 30-PR in HSE trials than Rest trials (p < 0.05). Thermal sensation was higher from post 30-PR to the start of the endurance test in AmW and PmW than AmR and PmR (p < 0.05). Perceived exertion from the start to 6 min into the endurance test was higher in HSE trials than Rest trials (p < 0.05). This study demonstrates that an approximately 1 °C increase in core temperature by prior HSE has the diurnal effects on endurance exercise capacity but not on sprint exercise capacity in the heat. Moreover, prior HSE reduces endurance exercise capacity in AM, but not in PM. This reduction is associated with a large difference in pre-exercise core temperature between AM trials which is caused by a relatively lower body temperature in the morning due to the time-of-day variation and contributes to lengthening the attainment of high core temperature during exercise in AmR.", "title": "" }, { "docid": "cae4703a50910c7718284c6f8230a4bc", "text": "Autonomous helicopter flight is widely regarded to be a highly challenging control problem. Despite this fact, human experts can reliably fly helicopters through a wide range of maneuvers, including aerobatic maneuvers at the edge of the helicopter’s capabilities. We present apprenticeship learning algorithms, which leverage expert demonstrations to efficiently learn good controllers for tasks being demonstrated by an expert. These apprenticeship learning algorithms have enabled us to significantly extend the state of the art in autonomous helicopter aerobatics. Our experimental results include the first autonomous execution of a wide range of maneuvers, including but not limited to in-place flips, in-place rolls, loops and hurricanes, and even auto-rotation landings, chaos and tic-tocs, which only exceptional human pilots can perform. Our results also include complete airshows, which require autonomous transitions between many of these maneuvers. Our controllers perform as well as, and often even better than, our expert pilot.", "title": "" }, { "docid": "de760fd6990bcf3e980e5fab24757621", "text": "The concept of ‘open innovation’ has received a considerable amount of coverage within the academic literature and beyond. Much of this seems to have been without much critical analysis of the evidence. In this paper, we show how Chesbrough creates a false dichotomy by arguing that open innovation is the only alternative to a closed innovation model. We systematically examine the six principles of the open innovation concept and show how the Open Innovation paradigm has created a partial perception by describing something which is undoubtedly true in itself (the limitations of closed innovation principles), but false in conveying the wrong impression that firms today follow these principles. 
We hope that our examination and scrutiny of the ‘open innovation’ concept contributes to the debate on innovation management and helps enrich our understanding.", "title": "" }, { "docid": "5305e147b2aa9646366bc13deb0327b0", "text": "This longitudinal case-study aimed at examining whether purposely teaching for the promotion of higher order thinking skills enhances students’ critical thinking (CT), within the framework of science education. Within a pre-, post-, and post–post experimental design, high school students, were divided into three research groups. The experimental group (n=57) consisted of science students who were exposed to teaching strategies designed for enhancing higher order thinking skills. Two other groups: science (n=41) and non-science majors (n=79), were taught traditionally, and acted as control. By using critical thinking assessment instruments, we have found that the experimental group showed a statistically significant improvement on critical thinking skills components and disposition towards critical thinking subscales, such as truth-seeking, open-mindedness, self-confidence, and maturity, compared with the control groups. Our findings suggest that if teachers purposely and persistently practice higher order thinking strategies for example, dealing in class with real-world problems, encouraging open-ended class discussions, and fostering inquiry-oriented experiments, there is a good chance for a consequent development of critical thinking capabilities.", "title": "" }, { "docid": "f7d64d093df1aa158636482af2dd7bff", "text": "Vision-based Human activity recognition is becoming a trendy area of research due to its wide application such as security and surveillance, human–computer interactions, patients monitoring system, and robotics. In the past two decades, there are several publically available human action, and activity datasets are reported based on modalities, view, actors, actions, and applications. The objective of this survey paper is to outline the different types of video datasets and highlights their merits and demerits under practical considerations. Based on the available information inside the dataset we can categorise these datasets into RGB (Red, Green, and Blue) and RGB-D(depth). The most prominent challenges involved in these datasets are occlusions, illumination variation, view variation, annotation, and fusion of modalities. The key specification of these datasets is discussed such as resolutions, frame rate, actions/actors, background, and application domain. We have also presented the state-of-the-art algorithms in a tabular form that give the best performance on such datasets. In comparison with earlier surveys, our works give a better presentation of datasets on the well-organised comparison, challenges, and latest evaluation technique on existing datasets.", "title": "" }, { "docid": "b902e401f0dd4cfa532a3561c9ab1d8c", "text": "Valerie the roboceptionist is the most recent addition to Carnegie Mellon's social robots project. A permanent installation in the entranceway to Newell-Simon hall, the robot combines useful functionality - giving directions, looking up weather forecasts, etc. - with an interesting and compelling character. We are using Valerie to investigate human-robot social interaction, especially long-term human-robot \"relationships\". Over a nine-month period, we have found that many visitors continue to interact with the robot on a daily basis, but that few of the individual interactions last for more than 30 seconds. 
Our analysis of the data has indicated several design decisions that should facilitate more natural human-robot interactions.", "title": "" }, { "docid": "0cc02773fd194c42071f8500a0c88261", "text": "Neuroscientific and psychological data suggest a close link between affordance and mirror systems in the brain. However, we still lack a full understanding of both the individual systems and their interactions. Here, we propose that the architecture and functioning of the two systems is best understood in terms of two challenges faced by complex organisms, namely: (a) the need to select among multiple affordances and possible actions dependent on context and high-level goals and (b) the exploitation of the advantages deriving from a hierarchical organisation of behaviour based on actions and action-goals. We first review and analyse the psychological and neuroscientific literature on the mechanisms and processes organisms use to deal with these challenges. We then analyse existing computational models thereof. Finally we present the design of a computational framework that integrates the reviewed knowledge. The framework can be used both as a theoretical guidance to interpret empirical data and design new experiments, and to design computational models addressing specific problems debated in the literature.", "title": "" }, { "docid": "f4d3cb8f68a2d36bd405643fcbbb0951", "text": "Persistent mucosal inflammation and microbial infection are characteristics of chronic rhinosinusitis (CRS). Mucosal microbiota dysbiosis is found in other chronic inflammatory diseases; however, the relationship between sinus microbiota composition and CRS is unknown. Using comparative microbiome profiling of a cohort of CRS patients and healthy subjects, we demonstrate that the sinus microbiota of CRS patients exhibits significantly reduced bacterial diversity compared with that of healthy controls. In our cohort of CRS patients, multiple, phylogenetically distinct lactic acid bacteria were depleted concomitant with an increase in the relative abundance of a single species, Corynebacterium tuberculostearicum. We recapitulated the conditions observed in our human cohort in a murine model and confirmed the pathogenic potential of C. tuberculostearicum and the critical necessity for a replete mucosal microbiota to protect against this species. Moreover, Lactobacillus sakei, which was identified from our comparative microbiome analyses as a potentially protective species, defended against C. tuberculostearicum sinus infection, even in the context of a depleted sinus bacterial community. These studies demonstrate that sinus mucosal health is highly dependent on the composition of the resident microbiota as well as identify both a new sino-pathogen and a strong bacterial candidate for therapeutic intervention.", "title": "" }, { "docid": "2e827d98d7aa9e5cc2c14759132b148e", "text": "In a world in which rational individuals may hold different prior beliefs, a sender can influence the behavior of a receiver by controlling the informativeness of an experiment (public signal). We characterize the set of distributions of posterior beliefs that can be induced by an experiment, and provide necessary and sufficient conditions for a sender to benefit from persuasion. We then provide sufficient conditions for the sender to benefit from persuasion for almost every pair of prior beliefs, even when there is no value of persuasion under a common prior. 
Our main condition is that the receiver’s action depends on his beliefs only through his expectation of some random variable. JEL classification: D72, D83, M31.", "title": "" }, { "docid": "2fd2553400cacc4dcb489460a7493dcb", "text": "Trustworthy generation of public random numbers is necessary for the security of a number of cryptographic applications. It was suggested to use the inherent unpredictability of blockchains as a source of public randomness. Entropy from the Bitcoin blockchain in particular has been used in lotteries and has been suggested for a number of other applications ranging from smart contracts to election auditing. In this Article, we analyse this idea and show how an adversary could manipulate these random numbers, even with limited computational power and financial budget.", "title": "" }, { "docid": "ea0df495739877a54954a59bd6592221", "text": "Passive and semipassive UHF RF identification (RFID) systems have traditionally been designed using scalar-valued differential radar cross section (DRCS) methods to model the backscattered signal from the tag. This paper argues that scalar-valued DRCS analysis is unnecessarily limiting because of the inherent coherence of the backscatter link and the complex-valued nature of load-dependent antenna-mode scattering from an RFID tag. Considering modulated backscatter in terms of complex-valued scattered fields opens the possibility of quadrature modulation of the backscatter channel. When compared with binary amplitude shift keying (ASK) or phase shift keying (PSK) based RFID systems, which transmit 1 bit of data per symbol period, and thus 1 bit per on-chip clock oscillator period, tags employing vector backscatter modulation can transmit more than 1 bit per symbol period. This increases the data rate for a given on-chip symbol clock rate leading to reduced on-chip power consumption and extended read range. Alternatively, tags employing an M-ary modulator can achieve log2 M higher data throughput at essentially the same dc power consumption as a tag employing binary ASK or PSK. In contrast to the binary ASK or PSK backscatter modulation employed by passive and semipassive UHF RFID tags, such as tags compliant with the widely used ISO18000-6c standard, this paper explores a novel CMOS-compatible method for generating M-ary quadrature amplitude modulated (QAM) backscatter modulation. A new method is presented for designing an inductorless M-ary QAM backscatter modulator using only an array of switched resistances and capacitances. Device-level simulation and measurements of a four-state phase shift keying (4-PSK)/four-state quadrature amplitude modulated (4-QAM) modulator are provided for a semipassive (battery-assisted) tag operating in the 850-950-MHz band. This first prototype modulator transmits 4-PSK/4-QAM at a symbol rate of 200 kHz and a bit rate of 400 kb/s at a static power dissipation of only 115 nW.", "title": "" }, { "docid": "eb9f34cd2b10f1c8099aad5e9064578a", "text": "Deep distance metric learning (DDML), which is proposed to learn image similarity metrics in an end-to-end manner based on the convolution neural network, has achieved encouraging results in many computer vision tasks. L2-normalization in the embedding space has been used to improve the performance of several DDML methods. However, the commonly used Euclidean distance is no longer an accurate metric for L2-normalized embedding space, i.e., a hyper-sphere. 
Another challenge of current DDML methods is that their loss functions are usually based on rigid data formats, such as the triplet tuple. Thus, an extra process is needed to prepare data in specific formats. In addition, their losses are obtained from a limited number of samples, which leads to a lack of the global view of the embedding space. In this paper, we replace the Euclidean distance with the cosine similarity to better utilize the L2-normalization, which is able to attenuate the curse of dimensionality. More specifically, a novel loss function based on the von Mises-Fisher distribution is proposed to learn a compact hyper-spherical embedding space. Moreover, a new efficient learning algorithm is developed to better capture the global structure of the embedding space. Experiments for both classification and retrieval tasks on several standard datasets show that our method achieves state-of-the-art performance with a simpler training procedure. Furthermore, we demonstrate that, even with a small number of convolutional layers, our model can still obtain significantly better classification performance than the widely used softmax loss.", "title": "" } ]
scidocsrr
9549bd01cfb4846e41fe748616ac774d
Quantum Computability
[ { "docid": "9fee40982932ec710af76ee322397d62", "text": "1 Abstract This paper surveys the use of thèhybrid argument' to prove to that quantum computations are insensitive to small perturbations. This property of quantum computations is used to establish that quantum circuits are robust against inaccuracy in the implementation of its elementary quantum gates. The insensitivity to small perturbations is also used to establish lower-bounds, including showing that relative to an oracle, the class NP requires exponential time on a quantum computer; and that quantum algorithms are polynomially related to deterministic algorithms in the black box model. 2 Introduction Quantum computation is an exciting area that lies at the foundations of both quantum physics and computer science. Quantum computers appear to violate the modern form of the Church-Turing thesis which states that Supported by a JSEP grant.", "title": "" } ]
[ { "docid": "836001910512e8bd7f71f4ac7448a6dd", "text": "We have developed a high-speed 1310-nm Al-MQW buried-hetero laser having 29-GHz bandwidth (BW). The laser was used to compare 28-Gbaud four-level pulse-amplitude-modulation (PAM4) and 56-Gb/s nonreturn to zero (NRZ) transmission performance. In both cases, it was possible to meet the 10-km link budget, however, 56-Gb/s NRZ operation achieved a 2-dB better sensitivity, attributable to the wide BW of the directly modulated laser and the larger eye amplitude for the NRZ format. On the other hand, the advantages for 28-Gbaud PAM4 were the reduced BW requirement for both the transmitter and the receiver PIN diode, which enabled us to use a lower bias to the laser and a PIN with a higher responsivity, or conversely enable the possibility of high temperature operation with lower power consumption. Both formats showed a negative dispersion penalty compared to back-to-back sensitivity using a negative fiber dispersion of -60 ps/nm, which was expected from the observed chirp characteristics of the laser. The reliability study up to 11 600 h at 85 °C under accelerated conditions showed no decrease in the output power at a constant bias of 60 mA.", "title": "" }, { "docid": "81cd5f1b4000603aca73629d3e158593", "text": "Document images captured by a mobile phone camera often have perspective distortions. Efficiency and accuracy are two important issues in designing a rectification system for such perspective documents. In this paper, we propose a new perspective rectification system based on vanishing point detection. This system achieves both the desired efficiency and accuracy using a multi-stage strategy: at the first stage, document boundaries and straight lines are used to compute vanishing points; at the second stage, text baselines and block aligns are utilized; and at the last stage, character tilt orientations are voted for the vertical vanishing point. A profit function is introduced to evaluate the reliability of detected vanishing points at each stage. If vanishing points at one stage are reliable, then rectification is ended at that stage. Otherwise, our method continues to seek more reliable vanishing points in the next stage. We have tested this method with more than 400 images including paper documents, signboards and posters. The image acceptance rate is more than 98.5% with an average speed of only about 60 ms.", "title": "" }, { "docid": "6a1fb3fc062adacbc2184d80f52a7a50", "text": "This paper proposes a flyback converter with a new noncomplementary active clamp control method. With the proposed control method, the energy in the leakage inductance can be fully recycled. The soft switching can be achieved for the main switch and the absorbed leakage energy is transferred to the output and input side. Compared to the conventional active clamp technique, the proposed methods can achieve high efficiency both for heavy-load and light-load condition, and the efficiency is almost not affected by the leakage inductance. The detailed operation principle and design considerations are presented. Performance of the proposed circuit is validated by the experimental results from a 16 V/4 A prototype.", "title": "" }, { "docid": "fc41fe625a7889272373c4b3f4757fb1", "text": "One of the biggest challenges in non-rigid shape retrieval and comparison is the design of a shape descriptor that would maintain invariance under a wide class of transformations the shape can undergo. 
Recently, heat kernel signature was introduced as an intrinsic local shape descriptor based on diffusion scale-space analysis. In this paper, we develop a scale-invariant version of the heat kernel descriptor. Our construction is based on a logarithmically sampled scale-space in which shape scaling corresponds, up to a multiplicative constant, to a translation. This translation is undone using the magnitude of the Fourier transform. The proposed scale-invariant local descriptors can be used in the bag-of-features framework for shape retrieval in the presence of transformations such as isometric deformations, missing data, topological noise, and global and local scaling. We get significant performance improvement over state-of-the-art algorithms on recently established non-rigid shape retrieval benchmarks.", "title": "" }, { "docid": "0f25cfa80ee503aa5012772ac54fb7a3", "text": "Parameter reduction has been an important topic in deep learning due to the everincreasing size of deep neural network models and the need to train and run them on resource limited machines. Despite many efforts in this area, there were no rigorous theoretical guarantees on why existing neural net compression methods should work. In this paper, we provide provable guarantees on some hashing-based parameter reduction methods in neural nets. First, we introduce a neural net compression scheme based on random linear sketching (which is usually implemented efficiently via hashing), and show that the sketched (smaller) network is able to approximate the original network on all input data coming from any smooth and wellconditioned low-dimensional manifold. The sketched network can also be trained directly via back-propagation. Next, we study the previously proposed HashedNets architecture and show that the optimization landscape of one-hidden-layer HashedNets has a local strong convexity property similar to a normal fully connected neural network. We complement our theoretical results with empirical verifications.", "title": "" }, { "docid": "961cc1dc7063706f8f66fc136da41661", "text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.", "title": "" }, { "docid": "bf8216ad7caf73cf63b988993b439412", "text": "Clothing retrieval and clothing style recognition are important and practical problems. They have drawn a lot of attention in recent years. 
However, the clothing photos collected in existing datasets are mostly of front- or near-front view. There are no datasets designed to study the influences of different viewing angles on clothing retrieval performance. To address view-invariant clothing retrieval problem properly, we construct a challenge clothing dataset, called Multi-View Clothing dataset. This dataset not only has four different views for each clothing item, but also provides 264 attributes for describing clothing appearance. We adopt a state-of-the-art deep learning method to present baseline results for the attribute prediction and clothing retrieval performance. We also evaluate the method on a more difficult setting, cross-view exact clothing item retrieval. Our dataset will be made publicly available for further studies towards view-invariant clothing retrieval.", "title": "" }, { "docid": "3ef1f71f47175d2687d5c11b0d023162", "text": "In attempting to fit a model of analogical problem solving to protocol data of students solving physics problems, several unexpected observations were made. Analogies between examples and exercises (a form of case-based reasoning) consisted of two distinct types of events. During an initialization event, the solver retrieved an example, set up a mapping between it and the problem, and decided whether the example was useful. During a transfer event, the solver inferred something about the problem’s solution. Many different types of initialization and transfer events were observed. Poor solvers tended to follow the example verbatim, copying each solution line over to the problem. Good solvers tried to solve the problem themselves, but referred to the example when they got stuck, or wanted to check a step, or wanted to avoid a detailed calculation. Rather than learn from analogies, both Good and Poor solvers tended to repeat analogies at subsequent similar situations. A revised version of the model is proposed (but not yet implemented) that appears to be consistent with all the findings observed in this and other studies of the same subjects.", "title": "" }, { "docid": "40e38080e12b2d73836fcb1cf79db033", "text": "The research in statistical parametric speech synthesis is towards improving naturalness and intelligibility. In this work, the deviation in spectral tilt of the natural and synthesized speech is analyzed and observed a large gap between the two. Furthermore, the same is analyzed for different classes of sounds, namely low-vowels, mid-vowels, high-vowels, semi-vowels, nasals, and found to be varying with category of sound units. Based on variation, a novel method for spectral tilt enhancement is proposed, where the amount of enhancement introduced is different for different classes of sound units. The proposed method yields improvement in terms of intelligibility, naturalness, and speaker similarity of the synthesized speech.", "title": "" }, { "docid": "ff3c50ecbd71b7c2ce6e4207dae73b3b", "text": "Information has emerged as an agent of integration and the enabler of new competitiveness for today’s enterprise in the global marketplace. However, has the paradigm of strategic planning changed sufficiently to support the new role of information systems and technology? We reviewed the literature for commonly used or representative information planning methodologies and found that a new approach is needed. There are six methodologies reviewed in this paper. 
They all tend to regard planning as a separate stage which does not connect structurally and directly to the information systems development. An integration of planning with development and management through enterprise information resources which capture and characterize the enterprise will shorten the response cycle and even allow for economic evaluation of information system investment.", "title": "" }, { "docid": "99d99ce673dfc4a6f5bf3e7d808a5570", "text": "We introduce an online popularity prediction and tracking task as a benchmark task for reinforcement learning with a combinatorial, natural language action space. A specified number of discussion threads predicted to be popular are recommended, chosen from a fixed window of recent comments to track. Novel deep reinforcement learning architectures are studied for effective modeling of the value function associated with actions comprised of interdependent sub-actions. The proposed model, which represents dependence between sub-actions through a bi-directional LSTM, gives the best performance across different experimental configurations and domains, and it also generalizes well with varying numbers of recommendation requests.", "title": "" }, { "docid": "838c50eaf711cfb30839feb826e30171", "text": "Security is a concern in the design of a wide range of embedded systems. Extensive research has been devoted to the development of cryptographic algorithms that provide the theoretical underpinnings of information security. Functional security mechanisms, such as security protocols, suitably employ these mathematical primitives in order to achieve the desired security objectives. However, functional security mechanisms alone cannot ensure security, since most embedded systems present attackers with an abundance of opportunities to observe or interfere with their implementation, and hence to compromise their theoretical strength. This paper surveys various tamper or attack techniques, and explains how they can be used to undermine or weaken security functions in embedded systems. Tamper-resistant design refers to the process of designing a system architecture and implementation that is resistant to such attacks. We outline approaches that have been proposed to design tamper-resistant embedded systems, with examples drawn from recent commercial products.", "title": "" }, { "docid": "41b8fb6fd9237c584ce0211f94a828be", "text": "Over the last few years, two of the main research directions in machine learning of natural language processing have been the study of semi-supervised learning algorithms as a way to train classifiers when the labeled data is scarce, and the study of ways to exploit knowledge and global information in structured learning tasks. In this paper, we suggest a method for incorporating domain knowledge in semi-supervised learning algorithms. Our novel framework unifies and can exploit several kinds of task specific constraints. The experimental results presented in the information extraction domain demonstrate that applying constraints helps the model to generate better feedback during learning, and hence the framework allows for high performance learning with significantly less training data than was possible before on these tasks.", "title": "" }, { "docid": "01055f9b1195cd7d03b404f3d530bb55", "text": "In recent years there has been an increasing interest in approaches to scientific summarization that take advantage of the citations a research paper has received in order to extract its main contributions. 
In this context, the CL-SciSumm 2017 Shared Task has been proposed to address citation-based information extraction and summarization. In this paper we present several systems to address three of the CL-SciSumm tasks. Notably, unsupervised systems to match citing and cited sentences (Task 1A), a supervised approach to identify the type of information being cited (Task 1B), and a supervised citation-based summarizer (Task 2).", "title": "" }, { "docid": "f0a82f428ac508351ffa7b97bb909b60", "text": "Automated Teller Machines (ATMs) can be considered among one of the most important service facilities in the banking industry. The investment in ATMs and the impact on the banking industry is growing steadily in every part of the world. The banks take into consideration many factors like safety, convenience, visibility, and cost in order to determine the optimum locations of ATMs. Today, ATMs are not only available in bank branches but also at retail locations. Another important factor is the cash management in ATMs. A cash demand model for every ATM is needed in order to have an efficient cash management system. This forecasting model is based on historical cash demand data which is highly related to the ATMs location. So, the location and the cash management problem should be considered together. This paper provides a general review on studies, efforts and development in ATMs location and cash management problem. Keywords—ATM location problem, cash management problem, ATM cash replenishment problem, literature review in ATMs.", "title": "" }, { "docid": "0e7910a66c78434daace620de9b49cfe", "text": "Gravity balancing has great significance in many rehabilitation applications and one example is in minimizing the resistance of lifting a patient's upper limb due to its own weight. As compared to commonly used active method in exoskeleton, passive weight compensation offers many other advantages such as elimination of motors which reduces the overall load acting on the patient. However, the torque about each joint due to the weight of the upper and fore arm is dependent on the orientation and location of each other, which makes it a challenge to compensate. Hence, this paper proposes and develop a passive method to compensate the torques on each joint. The proposed design includes a decoupling mechanism for isolating the torsional effect between various linkages and a realizable torsional compliant beam which can provide a specific torsional stiffness function to compensate the torque due to the arm's weight. In place of the usual expensive zero-length spring as the solution, an algorithm that generates the appropriate compliant beam design is also presented in this paper. The end result of the manipulator arm is a reduced required joint torque for gravity compensation using only passive means. A two-DOF prototype, with easy possible extension of additional DOF, has been built, and experiments are performed to illustrate, verify and quantify the effectiveness of the proposed method.", "title": "" }, { "docid": "d3c3195b8272bd41d0095e236ddb1d96", "text": "The extension of in vivo optical imaging for disease screening and image-guided surgical interventions requires brightly emitting, tissue-specific materials that optically transmit through living tissue and can be imaged with portable systems that display data in real-time. Recent work suggests that a new window across the short-wavelength infrared region can improve in vivo imaging sensitivity over near infrared light. 
Here we report on the first evidence of multispectral, real-time short-wavelength infrared imaging offering anatomical resolution using brightly emitting rare-earth nanomaterials and demonstrate their applicability toward disease-targeted imaging. Inorganic-protein nanocomposites of rare-earth nanomaterials with human serum albumin facilitated systemic biodistribution of the rare-earth nanomaterials resulting in the increased accumulation and retention in tumour tissue that was visualized by the localized enhancement of infrared signal intensity. Our findings lay the groundwork for a new generation of versatile, biomedical nanomaterials that can advance disease monitoring based on a pioneering infrared imaging technique.", "title": "" }, { "docid": "b46a967ad85c5b64c0f14f703d385b24", "text": "Bitcoin has shown great utility around the world with the drastic increase in its value and global consensus method of proof-of-work (POW). Over the years after the revolution in the digital transaction space, we are looking at major scalability issue with old POW consensus method and bitcoin peak limit of processing only 7 transactions per second. With more companies trying to adopt blockchain to modify their existing systems, blockchain working on old consensus methods and with scalability issues can't deliver the optimal solution. Specifically, with new trends like smart contracts and DAPPs, much better performance is needed to support any actual business applications. Such requirements are pushing the new platforms away from old methods of consensus and adoption of off-chain solutions. In this paper, we discuss various scalability issues with the Bitcoin and Ethereum blockchain and recent proposals like the lighting protocol, sharding, super quadratic sharding, DPoS to solve these issues. We also draw the comparison between these proposals on their ability to overcome scalability limits and highlighting major problems in these approaches. In the end, we propose our solution to suffice the scalability issue and conclude with the fact that with better scalability, blockchain has the potential to outrageously support varied domains of the industry.", "title": "" }, { "docid": "94ce8a264c9d2f36a58a026fa8e0be0e", "text": "Developing effective retrieval models is a long-standing central challenge in information retrieval research. In order to develop more effective models, it is necessary to understand the deficiencies of the current retrieval models and the relative strengths of each of them. In this article, we propose a general methodology to analytically and experimentally diagnose the weaknesses of a retrieval function, which provides guidance on how to further improve its performance. Our methodology is motivated by the empirical observation that good retrieval performance is closely related to the use of various retrieval heuristics. We connect the weaknesses and strengths of a retrieval function with its implementations of these retrieval heuristics, and propose two strategies to check how well a retrieval function implements the desired retrieval heuristics. The first strategy is to formalize heuristics as constraints, and use constraint analysis to analytically check the implementation of retrieval heuristics. The second strategy is to define a set of relevance-preserving perturbations and perform diagnostic tests to empirically evaluate how well a retrieval function implements retrieval heuristics. 
Experiments show that both strategies are effective to identify the potential problems in implementations of the retrieval heuristics. The performance of retrieval functions can be improved after we fix these problems.", "title": "" }, { "docid": "31aaa3c89b6810abdd424cc88b89d5a4", "text": "Recommendation techniques have been well developed in the past decades. Most of them build models only based on user item rating matrix. However, in real world, there is plenty of auxiliary information available in recommendation systems. We can utilize these information as additional features to improve recommendation performance. We refer to recommendation with auxiliary information as context-aware recommendation. Context-aware Factorization Machines (FM) is one of the most successful context-aware recommendation models. FM models pairwise interactions between all features, in such way, a certain feature latent vector is shared to compute the factorized parameters it involved. In practice, there are tens of context features and not all the pairwise feature interactions are useful. Thus, one important challenge for context-aware recommendation is how to effectively select \"good\" interaction features. In this paper, we focus on solving this problem and propose a greedy interaction feature selection algorithm based on gradient boosting. Then we propose a novel Gradient Boosting Factorization Machine (GBFM) model to incorporate feature selection algorithm with Factorization Machines into a unified framework. The experimental results on both synthetic and real datasets demonstrate the efficiency and effectiveness of our algorithm compared to other state-of-the-art methods.", "title": "" } ]
scidocsrr
c5e9dfa542be58dc5faba95a1a9335f7
Automatic Mood Classification for Music
[ { "docid": "c4421784554095ffed1365b3ba41bdc0", "text": "Mood classification of music is an emerging domain of music information retrieval. In the approach presented here features extracted from an audio file are used in combination with the affective value of song lyrics to map a song onto a psychologically based emotion space. The motivation behind this system is the lack of intuitive and contextually aware playlist generation tools available to music listeners. The need for such tools is made obvious by the fact that digital music libraries are constantly expanding, thus making it increasingly difficult to recall a particular song in the library or to create a playlist for a specific event. By combining audio content information with context-aware data, such as song lyrics, this system allows the listener to automatically generate a playlist to suit their current activity or mood. Thesis Supervisor: Barry Vercoe Title: Professor of Media Arts and Sciences, Program in Media Arts and Sciences", "title": "" } ]
[ { "docid": "b1202b110ae83980a71b14d9d6fd65cb", "text": "In modern daily life people need to move, whether in business or leisure, sightseeing or addressing a meeting. Often this is done in familiar environments, but in some cases we need to find our way in unfamiliar scenarios. Visual impairment is a factor that greatly reduces mobility. Currently, the most widespread and used means by the visually impaired people are the white stick and the guide dog; however both present some limitations. With the recent advances in inclusive technology it is possible to extend the support given to people with visual impairment during their mobility. In this context we propose a system, named SmartVision, whose global objective is to give blind users the ability to move around in unfamiliar environments, whether indoor or outdoor, through a user friendly interface that is fed by a geographic information system (GIS). In this paper we propose the development of an electronic white cane that helps moving around, in both indoor and outdoor environments, providing contextualized geographical information using RFID technology.", "title": "" }, { "docid": "a6459555eb54297f623800bcdf10dcc6", "text": "Phishing causes billions of dollars in damage every year and poses a serious threat to the Internet economy. Email is still the most commonly used medium to launch phishing attacks [1]. In this paper, we present a comprehensive natural language based scheme to detect phishing emails using features that are invariant and fundamentally characterize phishing. Our scheme utilizes all the information present in an email, namely, the header, the links and the text in the body. Although it is obvious that a phishing email is designed to elicit an action from the intended victim, none of the existing detection schemes use this fact to identify phishing emails. Our detection protocol is designed specifically to distinguish between “actionable” and “informational” emails. To this end, we incorporate natural language techniques in phishing detection. We also utilize contextual information, when available, to detect phishing: we study the problem of phishing detection within the contextual confines of the user’s email box and demonstrate that context plays an important role in detection. To the best of our knowledge, this is the first scheme that utilizes natural language techniques and contextual information to detect phishing. We show that our scheme outperforms existing phishing detection schemes. Finally, our protocol detects phishing at the email level rather than detecting masqueraded websites. This is crucial to prevent the victim from clicking any harmful links in the email. Our implementation called PhishNet-NLP, operates between a user’s mail transfer agent (MTA) and mail user agent (MUA) and processes each arriving email for phishing attacks even before reaching the", "title": "" }, { "docid": "ed0275192a690132f0d3242b883d8813", "text": "Significantly more carbon is stored in the world's soils—including peatlands, wetlands and permafrost—than is present in the atmosphere. Disagreement exists, however, regarding the effects of climate change on global soil carbon stocks. If carbon stored belowground is transferred to the atmosphere by a warming-induced acceleration of its decomposition, a positive feedback to climate change would occur. Conversely, if increases of plant-derived carbon inputs to soils exceed increases in decomposition, the feedback would be negative. 
Despite much research, a consensus has not yet emerged on the temperature sensitivity of soil carbon decomposition. Unravelling the feedback effect is particularly difficult, because the diverse soil organic compounds exhibit a wide range of kinetic properties, which determine the intrinsic temperature sensitivity of their decomposition. Moreover, several environmental constraints obscure the intrinsic temperature sensitivity of substrate decomposition, causing lower observed ‘apparent’ temperature sensitivity, and these constraints may, themselves, be sensitive to climate.", "title": "" }, { "docid": "695af0109c538ca04acff8600d6604d4", "text": "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.", "title": "" }, { "docid": "cd81a67321e796a44f78e80479d35096", "text": "Nature-inspired intelligent swarm technologies deals with complex problems that might be impossible to solve using traditional technologies and approaches. Swarm intelligence techniques (note the difference from intelligent swarms) are population-based stochastic methods used in combinatorial optimization problems in which the collective behavior of relatively simple individuals arises from their local interactions with their environment to produce functional global patterns. Swarm intelligence represents a meta heuristic approach to solving a variety of problems", "title": "" }, { "docid": "2e3319cf6daead166c94345c52a8389a", "text": "Due to their high energy density and low material cost, lithium-sulfur batteries represent a promising energy storage system for a multitude of emerging applications, ranging from stationary grid storage to mobile electric vehicles. This review aims to summarize major developments in the field of lithium-sulfur batteries, starting from an overview of their electrochemistry, technical challenges and potential solutions, along with some theoretical calculation results to advance our understanding of the material interactions involved. Next, we examine the most extensively-used design strategy: encapsulation of sulfur cathodes in carbon host materials. Other emerging host materials, such as polymeric and inorganic materials, are discussed as well. 
This is followed by a survey of novel battery configurations, including the use of lithium sulfide cathodes and lithium polysulfide catholytes, as well as recent burgeoning efforts in the modification of separators and protection of lithium metal anodes. Finally, we conclude with an outlook section to offer some insight on the future directions and prospects of lithium-sulfur batteries.", "title": "" }, { "docid": "5682abf15b97eba47c5237ed12c6ec94", "text": "Land cover prediction is essential for monitoring global environmental change. Unfortunately, traditional classification models are plagued by temporal variation and emergence of novel/unseen land cover classes in the prediction process. In this paper, we propose an LSTM-based spatio-temporal learning framework with a dual-memory structure. The dual-memory structure captures both long-term and short-term temporal variation patterns, and is updated incrementally to adapt the model to the ever-changing environment. Moreover, we integrate zero-shot learning to identify unseen classes even without labelled samples. Experiments on both synthetic and real-world datasets demonstrate the superiority of the proposed framework over multiple baselines in land cover prediction.", "title": "" }, { "docid": "53e8333b3e4e9874449492852d948ea2", "text": "In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches. Such coupling resembles an online learned classifier/regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework. In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks.", "title": "" }, { "docid": "c421007cd20cf1adf5345fc0ef8d6604", "text": "A novel compact monopulse cavity-backed substrate integrated waveguide (SIW) antenna is proposed. The antenna consists of an array of four circularly polarized (CP) cavity-backed SIW antennas, three dual-mode hybrid coupler, and three input ports. TE10 and TE20 modes are excited in the dual-mode hybrid to produce sum and difference patterns, respectively. The antenna is modeled with a fast full-wave hybrid numerical method and also simulated using full-wave Ansoft HFSS. The whole antenna is integrated on a two-layer dielectric with the size of 42 mm × 36 mm. A prototype of the proposed monopulse antenna at the center frequency of 9.9 GHz is manufactured. Measured results show -10-dB impedance bandwidth of 2.4%, 3-dB axial ratio (AR) bandwidth of 1.75%, 12.3-dBi gain, and -28-dB null depth. 
The proposed antenna has good monopulse radiation characteristics, high efficiency, and can be easily integrated with planar circuits.", "title": "" }, { "docid": "7d0c25928504a9cb5879204eb3eeaf50", "text": "This article is the second of a two-part tutorial on visual servo control. In this tutorial, we have only considered velocity controllers. It is convenient for most of classical robot arms. However, the dynamics of the robot must of course be taken into account for high speed task, or when we deal with mobile nonholonomic or underactuated robots. As for the sensor, geometrical features coming from a classical perspective camera is considered. Features related to the image motion or coming from other vision sensors necessitate to revisit the modeling issues to select adequate visual features. Finally, fusing visual features with data coming from other sensors at the level of the control scheme will allow to address new research topics", "title": "" }, { "docid": "041b308fe83ac9d5a92e33fd9c84299a", "text": "Spaceborne synthetic aperture radar systems are severely constrained to a narrow swath by ambiguity limitations. Here a vertically scanned-beam synthetic aperture system (SCANSAR) is proposed as a solution to this problem. The potential length of synthetic aperture must be shared between beam positions, so the along-track resolution is poorer; a direct tradeoff exists between resolution and swath width. The length of the real aperture is independently traded against the number of scanning positions. Design curves and equations are presented for spaceborne SCANSARs for altitudes between 400 and 1400 km and inner angles of incidence between 20° and 40°. When the real antenna is approximately square, it may also be used for a microwave radiometer. The combined radiometer and synthetic-aperture (RADISAR) should be useful for those applications where the poorer resolution of the radiometer is useful for some purposes, but the finer resolution of the radar is needed for others.", "title": "" }, { "docid": "b4fa57fec99131cdf0cb6fc4795fce43", "text": "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.", "title": "" }, { "docid": "9e289058f404720f73ee8240a84db54d", "text": "PURPOSE OF THE STUDY\nWe assessed whether a shared site intergenerational care program informed by contact theory contributed to more desirable social behaviors of elders and children during intergenerational programming than a center with a more traditional programming approach that lacks some or all of the contact theory tenets.\n\n\nDESIGN AND METHODS\nWe observed 59 elder and child participants from the two sites during intergenerational activities. 
Using the Intergenerational Observation Scale, we coded participants' predominant behavior in 15-s intervals through each activity's duration. We then calculated for each individual the percentage of time frames each behavior code was predominant.\n\n\nRESULTS\nParticipants at the theory-based program demonstrated higher rates of intergenerational interaction, higher rates of solitary behavior, and lower rates of watching than at the traditional program.\n\n\nIMPLICATIONS\nContact theory tenets were optimized when coupled with evidence-based practices. Intergenerational programs with stakeholder support that promotes equal group status, cooperation toward a common goal, and mechanisms of friendship among participants can achieve important objectives for elder and child participants in care settings.", "title": "" }, { "docid": "d34b0fe424d1f1b748a0929fe1a67cc5", "text": "Isolating sensitive data and state can increase the security and robustness of many applications. Examples include protecting cryptographic keys against exploits like OpenSSL’s Heartbleed bug or protecting a language runtime from native libraries written in unsafe languages. When runtime references across isolation boundaries occur relatively infrequently, then page-based hardware isolation can be used, because the cost of kernelor hypervisor-mediated domain switching is tolerable. However, some applications, such as isolating cryptographic session keys in a network-facing application or isolating frequently invoked native libraries in managed runtimes, require very frequent domain switching. In such applications, the overhead of kernelor hypervisormediated domain switching is prohibitive. In this paper, we present ERIM, a novel technique that provides hardware-enforced isolation with low overhead, even at high switching rates (ERIM’s average overhead is less than 1% for 100,000 switches per second). The key idea is to combine memory protection keys (MPKs), a feature recently added to Intel CPUs that allows protection domain switches in userspace, with binary inspection to prevent circumvention. We show that ERIM can be applied with little effort to new and existing applications, doesn’t require compiler changes, can run on a stock Linux kernel, and has low runtime overhead even at high domain switching rates.", "title": "" }, { "docid": "4b3d890a8891cd8c84713b1167383f6f", "text": "The present research tested the hypothesis that concepts of gratitude are prototypically organized and explored whether lay concepts of gratitude are broader than researchers' concepts of gratitude. In five studies, evidence was found that concepts of gratitude are indeed prototypically organized. In Study 1, participants listed features of gratitude. In Study 2, participants reliably rated the centrality of these features. In Studies 3a and 3b, participants perceived that a hypothetical other was experiencing more gratitude when they read a narrative containing central as opposed to peripheral features. In Study 4, participants remembered more central than peripheral features in gratitude narratives. In Study 5a, participants generated more central than peripheral features when they wrote narratives about a gratitude incident, and in Studies 5a and 5b, participants generated both more specific and more generalized types of gratitude in similar narratives. 
Throughout, evidence showed that lay conceptions of gratitude are broader than current research definitions.", "title": "" }, { "docid": "93cec060a420f2ffc3e67eb532186f8e", "text": "This paper presents an efficient approach to identify tabular structures within either electronic or paper documents. The resulting T—Recs system takes word bounding box information as input, and outputs the corresponding logical text block units (e.g. the cells within a table environment). Starting with an arbitrary word as block seed the algorithm recursively expands this block to all words that interleave with their vertical (north and south) neighbors. Since even smallest gaps of table columns prevent their words from mutual interleaving, this initial segmentation is able to identify and isolate such columns. In order to deal with some inherent segmentation errors caused by isolated lines (e.g. headers), overhanging words, or cells spawning more than one column, a series of postprocessing steps is added. These steps benefit from a very simple distinction between type 1 and type 2 blocks: type 1 blocks are those of at most one word per line, all others are of type 2. This distinction allows the selective application of heuristics to each group of blocks. The conjoint decomposition of column blocks into subsets of table cells leads to the final block segmentation of a homogeneous abstraction level. These segments serve the final layout analysis which identifies table environments and cells that are stretching over several rows and/or columns.", "title": "" }, { "docid": "57307626d8d2657b8ec60e4edfa81095", "text": "This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most of the existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode the temporal information, such as transition rules, which is important for identifying the next sleep stages, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional-long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets, that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved similar overall accuracy and macro F1-score (MASS: 86.2%−81.7, Sleep-EDF: 82.0%−76.9) compared with the state-of-the-art methods (MASS: 85.9%−80.5, Sleep-EDF: 78.9%−73.7) on both data sets. This demonstrated that, without changing the model architecture and the training algorithm, our model could automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets without utilizing any hand-engineered features.", "title": "" }, { "docid": "1f2ec917e09792294b08d1d9ea380a97", "text": "Can humans fly? Emphatically no. Can cars eat? Again, absolutely not. Yet, these absurd inferences result from the current disregard for particular types of actors in action understanding. There is no work we know of on simultaneously inferring actors and actions in the video, not to mention a dataset to experiment with. 
Our paper hence marks the first effort in the computer vision community to jointly consider various types of actors undergoing various actions. To start with the problem, we collect a dataset of 3782 videos from YouTube and label both pixel-level actors and actions in each video. We formulate the general actor-action understanding problem and instantiate it at various granularities: both video-level single- and multiple-label actor-action recognition and pixel-level actor-action semantic segmentation. Our experiments demonstrate that inference jointly over actors and actions outperforms inference independently over them, and hence concludes our argument of the value of explicit consideration of various actors in comprehensive action understanding.", "title": "" }, { "docid": "5c18830610621c61ce910a97d5878e34", "text": "We report here on a quantitative technique called COBRA to determine DNA methylation levels at specific gene loci in small amounts of genomic DNA. Restriction enzyme digestion is used to reveal methylation-dependent sequence differences in PCR products of sodium bisulfite-treated DNA as described previously. We show that methylation levels in the original DNA sample are represented by the relative amounts of digested and undigested PCR product in a linearly quantitative fashion across a wide spectrum of DNA methylation levels. In addition, we show that this technique can be reliably applied to DNA obtained from microdissected paraffin-embedded tissue samples. COBRA thus combines the powerful features of ease of use, quantitative accuracy, and compatibility with paraffin sections.", "title": "" }, { "docid": "b0c2d9130a48fc0df8f428460b949741", "text": "A micro-strip patch antenna for a passive radio frequency identification (RFID) tag which can operate in the ultra high frequency (UHF) range from 865 MHz to 867 MHz is presented in this paper. The proposed antenna is designed and suitable for tagging the metallic boxes in the UK and Europe warehouse environment. The design is supplemented with the simulation results. In addition, the effect of the antenna substrate thickness and the ground plane on the performance of the proposed antenna is also investigated. The study shows that there is little affect by the antenna substrate thickness on the performance.", "title": "" } ]
scidocsrr
b14da4a8a9adf4bf479d1e5707b29046
CONSUMER'S PERCEPTION AND PURCHASE INTENTIONS
[ { "docid": "3f30c821132e07838de325c4f2183f84", "text": "This paper argues for the recognition of important experiential aspects of consumption. Specifically, a general framework is constructed to represent typical consumer behavior variables. Based on this paradigm, the prevailing information processing model is contrasted with an experiential view that focuses on the symbolic, hedonic, and esthetic nature of consumption. This view regards the consumption experience as a phenomenon directed toward the pursuit of fantasies, feelings, and fun.", "title": "" }, { "docid": "6307379eaab0db0726d791e38e533249", "text": "The present study aimed to examine the effectiveness of advertisements in enhancing consumers’ purchasing intention on Facebook in 2013. It is an applied study in terms of its goals, and a descriptive survey one in terms of methodology. The statistical population included all undergraduate students in Cypriot universities. An 11-item researcher-made questionnaire was used to compare and analyze the effectiveness of advertisements. Data analysis was carried out using SPSS17, the parametric statistical method of t-test, and the non-parametric Friedman test. The results of the study showed that Facebook advertising significantly affected brand image and brand equity, both of which factors contributed to a significant change in purchasing intention. 2015 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "efb305d95cf7197877de0b2fb510f33a", "text": "Drug-induced cardiotoxicity is emerging as an important issue among cancer survivors. For several decades, this topic was almost exclusively associated with anthracyclines, for which cumulative dose-related cardiac damage was the limiting step in their use. Although a number of efforts have been directed towards prediction of risk, so far no consensus exists on the strategies to prevent and monitor chemotherapy-related cardiotoxicity. Recently, a new dimension of the problem has emerged when drugs targeting the activity of certain tyrosine kinases or tumor receptors were recognized to carry an unwanted effect on the cardiovascular system. Moreover, the higher than expected incidence of cardiac dysfunction occurring in patients treated with a combination of old and new chemotherapeutics (e.g. anthracyclines and trastuzumab) prompted clinicians and researchers to find an effective approach to the problem. From the pharmacological standpoint, putative molecular mechanisms involved in chemotherapy-induced cardiotoxicity will be reviewed. From the clinical standpoint, current strategies to reduce cardiotoxicity will be critically addressed. In this perspective, the precise identification of the antitarget (i.e. the unwanted target causing heart damage) and the development of guidelines to monitor patients undergoing treatment with cardiotoxic agents appear to constitute the basis for the management of drug-induced cardiotoxicity.", "title": "" }, { "docid": "8c52c67dde20ce0a50ea22aaa4f917a5", "text": "This paper presents the vision of the Artificial Vision and Intelligent Systems Laboratory (VisLab) on future automated vehicles, ranging from sensor selection up to their extensive testing. VisLab's design choices are explained using the BRAiVE autonomous vehicle prototype as an example. BRAiVE, which is specifically designed to develop, test, and demonstrate advanced safety applications with different automation levels, features a high integration level and a low-cost sensor suite, which are mainly based on vision, as opposed to many other autonomous vehicle implementations based on expensive and invasive sensors. The importance of performing extensive tests to validate the design choices is considered to be a hard requirement, and different tests have been organized, including an intercontinental trip from Italy to China. This paper also presents the test, the main challenges, and the vehicles that have been specifically developed for this test, which was performed by four autonomous vehicles based on BRAiVE's architecture. This paper also includes final remarks on VisLab's perspective on future vehicles' sensor suite.", "title": "" }, { "docid": "f8bb40d16fc69283d52e5a51f5040214", "text": "With the potential to increase road safety and provide economic benefits, intelligent vehicles have elicited a significant amount of interest from both academics and industry. A robust and reliable vehicle detection and tracking system is one of the key modules for intelligent vehicles to perceive the surrounding environment. The millimeter-wave radar and the monocular camera are two vehicular sensors commonly used for vehicle detection and tracking. Despite their advantages, the drawbacks of these two sensors make them insufficient when used separately. Thus, the fusion of these two sensors is considered as an efficient way to address the challenge. 
This paper presents a collaborative fusion approach to achieve the optimal balance between vehicle detection accuracy and computational efficiency. The proposed vehicle detection and tracking design is extensively evaluated with a real-world data set collected by the developed intelligent vehicle. Experimental results show that the proposed system can detect on-road vehicles with 92.36% detection rate and 0% false alarm rate, and it only takes ten frames (0.16 s) for the detection and tracking of each vehicle. This system is installed on Kuafu-II intelligent vehicle for the fourth and fifth autonomous vehicle competitions, which is called “Intelligent Vehicle Future Challenge” in China.", "title": "" }, { "docid": "8fde46517d705da12fb43ce110a27a0f", "text": "Parabix (parallel bit streams for XML) is an open-source XML parser that employs the SIMD (single-instruction multiple-data) capabilities of modern-day commodity processors to deliver dramatic performance improvements over traditional byte-at-a-time parsing technology. Byte-oriented character data is first transformed to a set of 8 parallel bit streams, each stream comprising one bit per character code unit. Character validation, transcoding and lexical item stream formation are all then carried out in parallel using bitwise logic and shifting operations. Byte-at-a-time scanning loops in the parser are replaced by bit scan loops that can advance by as many as 64 positions with a single instruction.\n A performance study comparing Parabix with the open-source Expat and Xerces parsers is carried out using the PAPI toolkit. Total CPU cycle counts, level 2 data cache misses and branch mispredictions are measured and compared for each parser. The performance of Parabix is further studied with a breakdown of the cycle counts across the core components of the parser. Prospects for further performance improvements are also outlined, with a particular emphasis on leveraging the intraregister parallelism of SIMD processing to enable intrachip parallelism on multicore architectures.", "title": "" }, { "docid": "9c5711c68c7a9c7a4a8fc4d9dbcf145d", "text": "Approximate set membership data structures (ASMDSs) are ubiquitous in computing. They trade a tunable, often small, error rate ( ) for large space savings. The canonical ASMDS is the Bloom filter, which supports lookups and insertions but not deletions in its simplest form. Cuckoo filters (CFs), a recently proposed class of ASMDSs, add deletion support and often use fewer bits per item for equal . This work introduces the Morton filter (MF), a novel ASMDS that introduces several key improvements to CFs. Like CFs, MFs support lookups, insertions, and deletions, but improve their respective throughputs by 1.3× to 2.5×, 0.9× to 15.5×, and 1.3× to 1.6×. MFs achieve these improvements by (1) introducing a compressed format that permits a logically sparse filter to be stored compactly in memory, (2) leveraging succinct embedded metadata to prune unnecessary memory accesses, and (3) heavily biasing insertions to use a single hash function. With these optimizations, lookups, insertions, and deletions often only require accessing a single hardware cache line from the filter. These improvements are not at a loss in space efficiency, as MFs typically use comparable to slightly less space than CFs for the same . PVLDB Reference Format: Alex D. Breslow and Nuwan S. Jayasena. Morton Filters: Faster, Space-Efficient Cuckoo Filters via Biasing, Compression, and Decoupled Logical Sparsity. 
PVLDB, 11(9): 1041-1055, 2018. DOI: https://doi.org/10.14778/3213880.3213884", "title": "" }, { "docid": "0947b4a8336968eef8223cf820a9da99", "text": "This cross-sectional study showed that, although vegans had lower dietary calcium and protein intakes than omnivores, veganism did not have adverse effect on bone mineral density and did not alter body composition. Whether a lifelong vegetarian diet has any negative effect on bone health is a contentious issue. We undertook this study to examine the association between lifelong vegetarian diet and bone mineral density and body composition in a group of postmenopausal women. One hundred and five Mahayana Buddhist nuns and 105 omnivorous women (average age = 62, range = 50–85) were randomly sampled from monasteries in Ho Chi Minh City and invited to participate in the study. By religious rule, the nuns do not eat meat or seafood (i.e., vegans). Bone mineral density (BMD) at the lumbar spine (LS), femoral neck (FN), and whole body (WB) was measured by DXA (Hologic QDR 4500). Lean mass, fat mass, and percent fat mass were also obtained from the DXA whole body scan. Dietary calcium and protein intakes were estimated from a validated food frequency questionnaire. There was no significant difference between vegans and omnivores in LSBMD (0.74 ± 0.14 vs. 0.77 ± 0.14 g/cm2; mean ± SD; P = 0.18), FNBMD (0.62 ± 0.11 vs. 0.63 ± 0.11 g/cm2; P = 0.35), WBBMD (0.88 ± 0.11 vs. 0.90 ± 0.12 g/cm2; P = 0.31), lean mass (32 ± 5 vs. 33 ± 4 kg; P = 0.47), and fat mass (19 ± 5 vs. 19 ± 5 kg; P = 0.77) either before or after adjusting for age. The prevalence of osteoporosis (T scores ≤ −2.5) at the femoral neck in vegans and omnivores was 17.1% and 14.3% (P = 0.57), respectively. The median intake of dietary calcium was lower in vegans compared to omnivores (330 ± 205 vs. 682 ± 417 mg/day, P < 0.001); however, there was no significant correlation between dietary calcium and BMD. Further analysis suggested that whole body BMD, but not lumbar spine or femoral neck BMD, was positively correlated with the ratio of animal protein to vegetable protein. These results suggest that, although vegans have much lower intakes of dietary calcium and protein than omnivores, veganism does not have adverse effect on bone mineral density and does not alter body composition.", "title": "" }, { "docid": "966d8f17b750117effa81a64a0dba524", "text": "As the number of indoor location based services increase in the mobile market, service providers demanding more accurate indoor positioning information. To improve the location estimation accuracy, this paper presents an enhanced client centered location prediction scheme and filtering algorithm based on IEEE 802.11 ac standard for indoor positioning system. We proposed error minimization filtering algorithm for accurate location prediction and also introduce IEEE 802.11 ac mobile client centered location correction scheme without modification of preinstalled infrastructure Access Point. Performance evaluation based on MATLAB simulation highlight the enhanced location prediction performance. We observe a significant error reduction of angle estimation.", "title": "" }, { "docid": "4fd8b06d56841770804db122780a2483", "text": "Traumatic dental injuries (TDIs) of permanent teeth occur frequently in children and young adults. Crown fractures and luxations are the most commonly occurring of all dental injuries. Proper diagnosis, treatment planning and followup are important for improving a favorable outcome. 
Guidelines should assist dentists and patients in decision making and for providing the best care effectively and efficiently. The International Association of Dental Traumatology (IADT) has developed a consensus statement after a review of the dental literature and group discussions. Experienced researchers and clinicians from various specialties were included in the group. In cases where the data did not appear conclusive, recommendations were based on the consensus opinion of the IADT board members. The guidelines represent the best current evidence based on literature search and professional opinion. The primary goal of these guidelines is to delineate an approach for the immediate or urgent care of TDIs. In this first article, the IADT Guidelines for management of fractures and luxations of permanent teeth will be presented.", "title": "" }, { "docid": "f67746986d681cb5eb67ff76dbd4dc30", "text": "The creation of a complex web site is a thorny problem in user interface design. First, di erent visitors have distinct goals. Second, even a single visitor may have di erent needs at different times. Much of the information at the site may also be dynamic or time-dependent. Third, as the site grows and evolves, its original design may no longer be appropriate. Finally, a site may be designed for a particular purpose but used in unexpected ways. Web servers record data about user interactions and accumulate this data over time. We believe that AI techniques can be used to examine user access logs in order to automatically improve the site. We challenge the AI community to create adaptive web sites: sites that automatically improve their organization and presentation based on user access data. Several unrelated research projects in plan recognition, machine learning, knowledge representation, and user modeling have begun to explore aspects of this problem. We hope that posing this challenge explicitly will bring these projects together and stimulate fundamental AI research. Success would have a broad and highly visible impact on the web and the AI community.", "title": "" }, { "docid": "0bf3c08b71fedd629bdc584c3deeaa34", "text": "Unsupervised learning of linguistic structure is a difficult problem. A common approach is to define a generative model and maximize the probability of the hidden structure given the observed data. Typically, this is done using maximum-likelihood estimation (MLE) of the model parameters. We show using part-of-speech tagging that a fully Bayesian approach can greatly improve performance. Rather than estimating a single set of parameters, the Bayesian approach integrates over all possible parameter values. This difference ensures that the learned structure will have high probability over a range of possible parameters, and permits the use of priors favoring the sparse distributions that are typical of natural language. Our model has the structure of a standard trigram HMM, yet its accuracy is closer to that of a state-of-the-art discriminative model (Smith and Eisner, 2005), up to 14 percentage points better than MLE. We find improvements both when training from data alone, and using a tagging dictionary.", "title": "" }, { "docid": "d82e4f69cbf84875afbfc0872f7d37d8", "text": "Writing unit test code is labor-intensive, hence it is often not done as an integral part of programming. However, unit testing is a practical approach to increasing the correctness and quality of software; for example, the Extreme Programming approach relies on frequent unit testing. 
In this paper we present a new approach that makes writing unit tests easier. It uses a formal specification language’s runtime assertion checker to decide whether methods are working correctly, thus automating the writing of unit test oracles. These oracles can be easily combined with hand-written test data. Instead of writing testing code, the programmer writes formal specifications (e.g., preand postconditions). This makes the programmer’s task easier, because specifications are more concise and abstract than the equivalent test code, and hence more readable and maintainable. Furthermore, by using specifications in testing, specification errors are quickly discovered, so the specifications are more likely to provide useful documentation and inputs to other tools. We have implemented this idea using the Java Modeling Language (JML) and the JUnit testing framework, but the approach could be easily implemented with other combinations of formal specification languages and unit test tools.", "title": "" }, { "docid": "e976c416ffb58f92d7e8853f831e0887", "text": "OBJECTIVE\nTo establish whether bilateral standing with visual feedback therapy after stroke improves postural control compared with conventional therapy and to evaluate the generalization of the effects of visual feedback therapy on gait and gait-related activities.\n\n\nDESIGN\nA systematic review.\n\n\nMETHODS\nA computer-aided literature search was performed. Randomized controlled trials and controlled clinical trials, comparing visual feedback therapy with conventional balance treatments were included up to April 2005. The methodological quality of each study was assessed with the the Physiotherapy Evidence Database scale. Depending on existing heterogeneity, studies with a common variable of outcome were pooled by calculating the summary effect-sizes using fixed or random effects models.\n\n\nRESULTS\nEight out of 78 studies, presenting 214 subjects, were included for qualitative and quantitative analysis. The methodological quality ranged from 3 to 6 points. The meta-analysis demonstrated non-significant summary effect-sizes in favour of visual feedback therapy for weight distribution and postural sway, as well as balance and gait performance, and gait speed.\n\n\nCONCLUSION\nThe additional value of visual feedback therapy in bilateral standing compared with conventional therapy shows no statistically significant effects on symmetry of weight distribution between paretic and non-paretic leg, postural sway in bilateral standing, gait and gait-related activities. Visual feedback therapy should not be favoured over conventional therapy. The question remains as to exactly how asymmetry in weight distribution while standing is related to balance control in patients with stroke.", "title": "" }, { "docid": "46fe86de189eba0df238cdb65ee4fe2a", "text": "The linkage of ImageNet WordNet synsets to Wikidata items will leverage deep learning algorithm with access to a rich multilingual knowledge graph. Here I will describe our ongoing efforts in linking the two resources and issues faced in matching the Wikidata and WordNet knowledge graphs. I show an example on how the linkage can be used in a deep learning setting with real-time image classification and labeling in a non-English language and discuss what opportunities lies ahead.", "title": "" }, { "docid": "5a1a40a965d05d0eb898d9ff5595618c", "text": "BACKGROUND\nKeratosis pilaris is a common skin disorder of childhood that often improves with age. 
Less common variants of keratosis pilaris include keratosis pilaris atrophicans and atrophodermia vermiculata.\n\n\nOBSERVATIONS\nIn this case series from dermatology practices in the United States, Canada, Israel, and Australia, the clinical characteristics of 27 patients with keratosis pilaris rubra are described. Marked erythema with follicular prominence was noted in all patients, most commonly affecting the lateral aspects of the cheeks and the proximal arms and legs, with both more marked erythema and widespread extent of disease than in keratosis pilaris. The mean age at onset was 5 years (range, birth to 12 years). Sixty-three percent of patients were male. No patients had atrophy or scarring from their lesions. Various treatments were used, with minimal or no improvement in most cases.\n\n\nCONCLUSIONS\nKeratosis pilaris rubra is a variant of keratosis pilaris, with more prominent erythema and with more widespread areas of skin involvement in some cases, but without the atrophy or hyperpigmentation noted in certain keratosis pilaris variants. It seems to be a relatively common but uncommonly reported condition.", "title": "" }, { "docid": "79b3ed4c5e733c73b5e7ebfdf6069293", "text": "This paper addresses the problem of simultaneous 3D reconstruction and material recognition and segmentation. Enabling robots to recognise different materials (concrete, metal etc.) in a scene is important for many tasks, e.g. robotic interventions in nuclear decommissioning. Previous work on 3D semantic reconstruction has predominantly focused on recognition of everyday domestic objects (tables, chairs etc.), whereas previous work on material recognition has largely been confined to single 2D images without any 3D reconstruction. Meanwhile, most 3D semantic reconstruction methods rely on computationally expensive post-processing, using Fully-Connected Conditional Random Fields (CRFs), to achieve consistent segmentations. In contrast, we propose a deep learning method which performs 3D reconstruction while simultaneously recognising different types of materials and labeling them at the pixel level. Unlike previous methods, we propose a fully end-to-end approach, which does not require hand-crafted features or CRF post-processing. Instead, we use only learned features, and the CRF segmentation constraints are incorporated inside the fully end-to-end learned system. We present the results of experiments, in which we trained our system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application. The run-time performance of the system can be boosted to around 10Hz, using a conventional GPU, which is enough to achieve realtime semantic reconstruction using a 30fps RGB-D camera. To the best of our knowledge, this work is the first real-time end-to-end system for simultaneous 3D reconstruction and material recognition.", "title": "" }, { "docid": "046148901452aefdc5a14357ed89cbd3", "text": "Of late, there has been an avalanche of cross-layer design proposals for wireless networks. A number of researchers have looked at specific aspects of network performance and, approaching cross-layer design via their interpretation of what it implies, have presented several cross-layer design proposals. These proposals involve different layers of the protocol stack, and address both cellular and ad hoc networks. There has also been work relating to the implementation of cross-layer interactions. 
It is high time that these various individual efforts be put into perspective and a more holistic view be taken. In this article, we take a step in that direction by presenting a survey of the literature in the area of cross-layer design, and by taking stock of the ongoing work. We suggest a definition for cross-layer design, discuss the basic types of cross-layer design with examples drawn from the literature, and categorize the initial proposals on how cross-layer interactions may be implemented. We then highlight some open challenges and new opportunities for cross-layer design. Designers presenting cross-layer design proposals can start addressing these as they move ahead.", "title": "" }, { "docid": "f418de25bd2a60526eeb551b5272b6e0", "text": "In a world where security issues have been gaining growing importance, face recognition systems have attracted increasing attention in multiple application areas, ranging from forensics and surveillance to commerce and entertainment. To help understanding the landscape and abstraction levels relevant for face recognition systems, face recognition taxonomies allow a deeper dissection and comparison of the existing solutions. This paper proposes a new, more encompassing and richer multi-level face recognition taxonomy, facilitating the organization and categorization of available and emerging face recognition solutions; this taxonomy may also guide researchers in the development of more efficient face recognition solutions. The proposed multi-level taxonomy considers levels related to the face structure, feature support and feature extraction approach. Following the proposed taxonomy, a comprehensive survey of representative face recognition solutions is presented. The paper concludes with a discussion on current algorithmic and application related challenges which may define future research directions for face recognition.", "title": "" }, { "docid": "33906623c1ac445e18a30805d2a122cf", "text": "Diagnostic problems abound for individuals, organizations, and society. The stakes are high, often life and death. Such problems are prominent in the fields of health care, public safety, business, environment, justice, education, manufacturing, information processing, the military, and government. Particular diagnostic questions are raised repetitively, each time calling for a positive or negative decision about the presence of a given condition or the occurrence (often in the future) of a given event. Consider the following illustrations: Is a cancer present? Will this individual commit violence? Are there explosives in this luggage? Is this aircraft fit to fly? Will the stock market advance today? Is this assembly-line item flawed? Will an impending storm strike? Is there oil in the ground here? Is there an unsafe radiation level in my house? Is this person lying? Is this person using drugs? Will this applicant succeed? Will this book have the information I need? Is that plane intending to attack this ship? Is this applicant legally disabled? Does this tax return justify an audit? Each time such a question is raised, the available evidence is assessed by a person or a device or a combination of the two, and a choice is then made between the two alternatives, yes or no. The evidence may be a x-ray, a score on a psychiatric test, a chemical analysis, and so on. In considering just yes–no alternatives, such diagnoses do not exhaust the types of diagnostic questions that exist. 
Other questions, for example, a differential diagnosis in medicine, may require considering a half dozen or more possible alternatives. Decisions of the yes–no type, however, are prevalent and important, as the foregoing examples suggest, and they are the focus of our analysis. We suggest that diagnoses of this type rest on a general process with common characteristics across fields, and that the process warrants scientific analysis as a discipline in its own right (Swets, 1988, 1992). The main purpose of this article is to describe two ways, one obvious and one less obvious, in which diagnostic performance can be improved. The more obvious way to improve diagnosis is to improve its accuracy, that is, its ability to distinguish between the two diagnostic alternatives and to select the correct one. The less obvious way to improve diagnosis is to increase the utility of the diagnostic decisions that are made. That is, apart from improving accuracy, there is a need to produce decisions that are in tune both with the situational probabilities of the alternative diagnostic conditions and with the benefits and costs, respectively, of correct and incorrect decisions. Methods exist to achieve both goals. These methods depend on a measurement technique that separately and independently quantifies the two aspects of diagnostic performance, namely, its accuracy and the balance it provides among the various possible types of decision outcomes. We propose that together the method for measuring diagnostic performance and the methods for improving it constitute the fundamentals of a science of diagnosis. We develop the idea that this incipient discipline has been demonstrated to improve diagnosis in several fields, but is nonetheless virtually unknown and unused in others. We consider some possible reasons for the disparity between the general usefulness of the methods and their lack of general use, and we advance some ideas for reducing this disparity. To anticipate, we develop two successful examples of these methods in some detail: the prognosis of violent behavior and the diagnosis of breast and prostate cancer. We treat briefly other successful examples, such as weather forecasting and admission to a selective school. We also develop in detail two examples of fields that would markedly benefit from application of the methods, namely the detection of cracks in airplane wings and the detection of the virus of AIDS. Briefly treated are diagnoses of dangerous conditions for in-flight aircraft and of behavioral impairments that qualify as disabilities in individuals.", "title": "" }, { "docid": "15e11fccdd4088cce78118b8d14be05b", "text": "This paper presents a comparative performance analysis of wireless orthogonal frequency division multiplexing (OFDM) system with the implementation of comb type pilot-based channel estimation algorithm over frequency selective multi-path fading channels. The Minimum Mean Square Error (MMSE) method is used for the estimation of channel at pilot frequencies. For the estimation of channel at data frequencies different interpolation techniques such as low-pass, linear, and second order interpolation are employed. The OFDM system simulation has been carried out with Matlab and the performance is analyzed in terms of bit error rate (BER) for various signal mapping (BPSK, QPSK, 4QAM, 16QAM, and 64QAM) and channel (Rayleigh and Rician) conditions. 
The impact of selecting number of channel taps on the BER performance is also investigated.", "title": "" }, { "docid": "0320ebc09663ecd6bf5c39db472fcbde", "text": "The human visual system is capable of learning an unbounded number of facts from images including not only objects but also their attributes, actions and interactions. Such uniform understanding of visual facts has not received enough attention. Existing visual recognition systems are typically modeled differently for each fact type such as objects, actions, and interactions. We propose a setting where all these facts can be modeled simultaneously with a capacity to understand an unbounded number of facts in a structured way. The training data comes as structured facts in images, including (1) objects (e.g., <boy>), (2) attributes (e.g., <boy, tall>), (3) actions (e.g., <boy, playing>), and (4) interactions (e.g., <boy, riding, a horse >). Each fact has a language view (e.g., < boy, playing>) and a visual view (an image). We show that learning visual facts in a structured way enables not only a uniform but also generalizable visual understanding. We propose and investigate recent and strong approaches from the multiview learning literature and also introduce a structured embedding model. We applied the investigated methods on several datasets that we augmented with structured facts and a large scale dataset of > 202,000 facts and 814,000 images. Our results show the advantage of relating facts by the structure by the proposed model compared to the baselines.", "title": "" } ]
scidocsrr
ef0e273c553a89ec0a4a0418e863b23c
A CNN Accelerator on FPGA Using Depthwise Separable Convolution
[ { "docid": "bb29a8e942c69cdb6634faa563cddb3a", "text": "Convolutional neural network (CNN) finds applications in a variety of computer vision applications ranging from object recognition and detection to scene understanding owing to its exceptional accuracy. There exist different algorithms for CNNs computation. In this paper, we explore conventional convolution algorithm with a faster algorithm using Winograd's minimal filtering theory for efficient FPGA implementation. Distinct from the conventional convolution algorithm, Winograd algorithm uses less computing resources but puts more pressure on the memory bandwidth. We first propose a fusion architecture that can fuse multiple layers naturally in CNNs, reusing the intermediate data. Based on this fusion architecture, we explore heterogeneous algorithms to maximize the throughput of a CNN. We design an optimal algorithm to determine the fusion and algorithm strategy for each layer. We also develop an automated toolchain to ease the mapping from Caffe model to FPGA bitstream using Vivado HLS. Experiments using widely used VGG and AlexNet demonstrate that our design achieves up to 1.99X performance speedup compared to the prior fusion-based FPGA accelerator for CNNs.", "title": "" } ]
[ { "docid": "665fcc17971dc34ed6f89340e3b7bfe2", "text": "Central to the development of computer vision systems is the collection and use of annotated images spanning our visual world. Annotations may include information about the identity, spatial extent, and viewpoint of the objects present in a depicted scene. Such a database is useful for the training and evaluation of computer vision systems. Motivated by the availability of images on the Internet, we introduced a web-based annotation tool that allows online users to label objects and their spatial extent in images. To date, we have collected over 400 000 annotations that span a variety of different scene and object classes. In this paper, we show the contents of the database, its growth over time, and statistics of its usage. In addition, we explore and survey applications of the database in the areas of computer vision and computer graphics. Particularly, we show how to extract the real-world 3-D coordinates of images in a variety of scenes using only the user-provided object annotations. The output 3-D information is comparable to the quality produced by a laser range scanner. We also characterize the space of the images in the database by analyzing 1) statistics of the co-occurrence of large objects in the images and 2) the spatial layout of the labeled images.", "title": "" }, { "docid": "376369f5e8e9b91de8e9a188d499c740", "text": "Vision based bin picking is increasingly more difficult as the complexity of target objects increases. We propose an efficient solution where complex objects are sufficiently represented by simple feature cues, thus invariance to object complexity is established. The region extraction algorithm utilized in our approach is capable of providing the focus of attention to the simple cues as a trigger toward recognition and pose estimation. Successful bin picking experiments of industrial objects using stereo vision tools are presented.", "title": "" }, { "docid": "308e06ce00b1dfaf731b1a91e7c56836", "text": "OBJECTIVE\nTo systematically review the literature regarding how statistical process control--with control charts as a core tool--has been applied to healthcare quality improvement, and to examine the benefits, limitations, barriers and facilitating factors related to such application.\n\n\nDATA SOURCES\nOriginal articles found in relevant databases, including Web of Science and Medline, covering the period 1966 to June 2004.\n\n\nSTUDY SELECTION\nFrom 311 articles, 57 empirical studies, published between 1990 and 2004, met the inclusion criteria.\n\n\nMETHODS\nA standardised data abstraction form was used for extracting data relevant to the review questions, and the data were analysed thematically.\n\n\nRESULTS\nStatistical process control was applied in a wide range of settings and specialties, at diverse levels of organisation and directly by patients, using 97 different variables. The review revealed 12 categories of benefits, 6 categories of limitations, 10 categories of barriers, and 23 factors that facilitate its application and all are fully referenced in this report. Statistical process control helped different actors manage change and improve healthcare processes. It also enabled patients with, for example asthma or diabetes mellitus, to manage their own health, and thus has therapeutic qualities. Its power hinges on correct and smart application, which is not necessarily a trivial task.
This review catalogs 11 approaches to such smart application, including risk adjustment and data stratification.\n\n\nCONCLUSION\nStatistical process control is a versatile tool which can help diverse stakeholders to manage change in healthcare and improve patients' health.", "title": "" }, { "docid": "e4d1053a64a09a02f4890af66b28bbba", "text": "Branchio-oculo-facial syndrome (BOFS) is a rare autosomal dominant condition with variable expressivity, caused by mutations in the TFAP2A gene. We report a three generational family with four affected individuals. The consultand has typical features of BOFS including infra-auricular skin nodules, coloboma, lacrimal duct atresia, cleft lip, conductive hearing loss and typical facial appearance. She also exhibited a rare feature of preaxial polydactyly. Her brother had a lethal phenotype with multiorgan failure. We also report a novel variant in TFAP2A gene. This family highlights the variable severity of BOFS and, therefore, the importance of informed genetic counselling in families with BOFS.", "title": "" }, { "docid": "545d566dff3d4c4ace8dcd26040db3a2", "text": "In this paper, we first define our research problem as to detect collusive spammers in online review communities. Next we present our current progress on this topic, in which we have spotted anomalies by evaluating 15 behavioral features proposed in the state-of-the-art approaches. Then we propose a novel hybrid classification/clustering method to detect colluders in our dataset based on selected informative features. Experimental results show that our method promisingly improve the performance of traditional classifiers by incorporating clustering for the smoothing. Finally, possible extensions of our current work and challenges in achieving them are discussed as our future directions.", "title": "" }, { "docid": "60eb7099d3f871ed9f150ca3445aa18e", "text": "We have developed a finger-shaped sensor array (BioTac ) that provides simultaneous information about contact forces, microvibrations and thermal fluxes, mimicking the full cutaneous sensory capabilities of the human finger. For many tasks, such as identifying objects or maintaining stable grasp, these sensory modalities are synergistic. For example, information about the material composition of an object can be inferred from the rate of heat transfer from a heated finger to the object, but only if the location and force of contact are well controlled. In this chapter we introduce the three sensing modalities of our sensor and consider how they can be used synergistically. Tactile sensing and signal processing is necessary for human dexterity and is likely to be required in mechatronic systems such as robotic and prosthetic limbs if they are to achieve similar dexterity.", "title": "" }, { "docid": "af29a155a5afdb5b1a0c055d1fcb8f32", "text": "The cyclic redundancy check (CRC) is a popular error detection code (EDC) used in many digital transmission and storage protocols. Most existing digit-serial hardware CRC computation architectures are based on one of the two well-known bit-serial CRC linear feedback shift register (LFSR) architectures. In this paper, we present and investigate a generalized CRC formulation that incorporates negative degree terms. Through software simulations, we identify useful formulations that result in reduced time and/or area complexity CRC circuits compared to the existing non-retimed approaches. Implementation results on an Altera field-programmable gate array (FPGA) device are reported. 
We conclude that the proposed approach is most effective when the digit size is greater than the generator polynomial degree.", "title": "" }, { "docid": "8241a9425a09648a0ecdaf4190c7c07c", "text": "Colonoscopy has contributed to a marked decline in the number of colorectal cancer related deaths. However, recent data suggest that there is a significant (4-12%) miss-rate for the detection of even large polyps and cancers. To address this, we have been investigating an ‘automated feedback system’ which informs the endoscopist of possible sub-optimal inspection during colonoscopy. A fundamental step of this system is to distinguish non-informative frames from informative ones. Existing methods for this cannot classify water/ bubble frames as non-informative even though they do not carry any useful visual information of the colon mucosa. In this paper, we propose a novel texture feature based on accumulation of pixel differences, which can detect water and bubble frames with very high accuracy with significantly less processing time. The experimental results show the proposed feature can achieve more than 93% overall accuracy in almost half of the processing time the existing methods take.", "title": "" }, { "docid": "33296736553ceaab2e113b62c05a803c", "text": "In cases of child abuse, usually, the parents are initial suspects. A common explanation of the parents is that the injuries were caused by a sibling. Child-on-child violence is reported to be very rare in children less than 5 years of age, and thorough investigation by the police, child protective services, and medicolegal examinations are needed to proof or disproof the parents' statement. We report two cases of physical abuse of infants by small children.", "title": "" }, { "docid": "4851b83b4ef6efa36777c28be8548c8d", "text": "The finite element methodology has become a standard framework for approximating the solution to the Poisson-Boltzmann equation in many biological applications. In this article, we examine the numerical efficacy of least-squares finite element methods for the linearized form of the equations. In particular, we highlight the utility of a first-order form, noting optimality, control of the flux variables, and flexibility in the formulation, including the choice of elements. We explore the impact of weighting and the choice of elements on conditioning and adaptive refinement. In a series of numerical experiments, we compare the finite element methods when applied to the problem of computing the solvation free energy for realistic molecules of varying size.", "title": "" }, { "docid": "08ecf17772853fe198c96837d43cf572", "text": "Long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS) interventions can reduce malaria transmission by targeting mosquitoes when they feed upon sleeping humans and/or when they rest inside houses, livestock shelters or other man-made structures. 
However, many malaria vector species can maintain robust transmission, despite high coverage of LLINs/IRS containing insecticides to which they are physiologically fully susceptible, because they exhibit one or more behaviours that define the biological limits of achievable impact with these interventions: (1) natural or insecticide-induced avoidance of contact with treated surfaces within houses and early exit from them, minimizing exposure hazard of vectors which feed indoors upon humans, (2) feeding upon humans when they are active and unprotected outdoors, attenuating personal protection and any consequent community-wide suppression of transmission, (3) feeding upon animals, minimizing contact with insecticides targeted at humans or houses, (4) resting outdoors, away from insecticide-treated surfaces of nets, walls and roofs. Residual malaria transmission is therefore defined as all forms of transmission that can persist after achieving full population-wide coverage with effective LLIN and/or IRS containing active ingredients to which local vector populations are fully susceptible. Residual transmission is sufficiently intense across most of the tropics to render malaria elimination infeasible without new or improved vector control methods. Many novel or improved vector control strategies to address residual transmission are emerging that either (1) enhance control of adult vectors that enter houses to feed and/or rest by killing, repelling or excluding them, (2) kill or repel adult mosquitoes when they attack people outdoors, (3) kill adult mosquitoes when they attack livestock, (4) kill adult mosquitoes when they feed upon sugar, or (5) kill immature mosquitoes at aquatic habitats. However, none of these options has sufficient supporting evidence to justify full-scale programmatic implementation so concerted investment in their rigorous selection, development and evaluation is required over the coming decade to enable control and, ultimately, elimination of residual malaria transmission. In the meantime, national programmes may assess options for addressing residual transmission under programmatic conditions through exploratory pilot studies with strong monitoring, evaluation and operational research components, similarly to the Onchocerciasis Control Programme.", "title": "" }, { "docid": "574838d3fecf8e8dfc4254b41d446ad2", "text": "This paper proposes a new procedure to detect Glottal Closure and Opening Instants (GCIs and GOIs) directly from speech waveforms. The procedure is divided into two successive steps. First a mean-based signal is computed, and intervals where speech events are expected to occur are extracted from it. Secondly, at each interval a precise position of the speech event is assigned by locating a discontinuity in the Linear Prediction residual. The proposed method is compared to the DYPSA algorithm on the CMU ARCTIC database. A significant improvement as well as a better noise robustness are reported. Besides, results of GOI identification accuracy are promising for the glottal source characterization.", "title": "" }, { "docid": "044a73d9db2f61dc9b4f9de0bdaa1b3f", "text": "Traditionally employed human-to-human and human-to-machine communication has recently been replaced by a new trend known as the Internet of things (IoT). IoT enables device-to-device communication without any human intervention, hence, offers many challenges. In this paradigm, machine’s self-sustainability due to limited energy capabilities presents a great challenge. 
Therefore, this paper proposed a low-cost energy harvesting device using rectenna to mitigate the problem in the areas where battery constraint issues arise. So, an energy harvester is designed, optimized, fabricated, and characterized for energy harvesting and IoT applications which simply recycles radio-frequency (RF) energy at 2.4 GHz, from nearby Wi-Fi/WLAN devices and converts them to useful dc power. The physical model comprises of antenna, filters, rectifier, and so on. A rectangular patch antenna is designed and optimized to resonate at 2.4 GHz using the well-known transmission-line model while the band-pass and low-pass filters are designed using lumped components. Schottky diode (HSMS-2820) is used for rectification. The circuit is designed and fabricated using the low-cost FR4 substrate (<inline-formula> <tex-math notation=\"LaTeX\">${h}$ </tex-math></inline-formula> = 16 mm and <inline-formula> <tex-math notation=\"LaTeX\">$\\varepsilon _{r} = 4.6$ </tex-math></inline-formula>) having the fabricated dimensions of 285 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times \\,\\,90$ </tex-math></inline-formula> mm. Universal software radio peripheral and GNU Radio are employed to measure the received RF power, while similar measurements are carried out using R&S spectrum analyzer for validation. The received measured power is −64.4 dBm at the output port of the rectenna circuit. Hence, our design enables a pervasive deployment of self-operable next-generation IoT devices.", "title": "" }, { "docid": "41cc4f54df2533897cc678db9818902b", "text": "Financial statement fraud has reached the epidemic proportion globally. Recently, financial statement fraud has dominated the corporate news causing debacle at number of companies worldwide. In the wake of failure of many organisations, there is a dire need of prevention and detection of financial statement fraud. Prevention of financial statement fraud is a measure to stop its occurrence initially whereas detection means the identification of such fraud as soon as possible. Fraud detection is required only if prevention has failed. Therefore, a continuous fraud detection mechanism should be in place because management may be unaware about the failure of prevention mechanism. In this paper we propose a data mining framework for prevention and detection of financial statement fraud.", "title": "" }, { "docid": "a0d1d59fc987d90e500b3963ac11b2ad", "text": "The purpose of this paper is to present the applicability of THOMAS, an architecture specially designed to model agent-based virtual organizations, in the development of a multiagent system for managing and planning routes for clients in a mall. In order to build virtual organizations, THOMAS offers mechanisms to take into account their structure, behaviour, dynamic, norms and environment. Moreover, one of the primary characteristics of the THOMAS architecture is the use of agents with reasoning and planning capabilities. These agents can perform a dynamic reorganization when they detect changes in the environment. The proposed architecture is composed of a set of related modules that are appropriate for developing systems in highly volatile environments similar to the one presented in this study. This paper presents THOMAS as well as the results obtained after having applied the system to a case study. & 2011 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "ead461ea8f716f6fab42c08bb7b54728", "text": "Despite the increasing importance of data quality and the rich theoretical and practical contributions in all aspects of data cleaning, there is no single end-to-end off-the-shelf solution to (semi-)automate the detection and the repairing of violations w.r.t. a set of heterogeneous and ad-hoc quality constraints. In short, there is no commodity platform similar to general purpose DBMSs that can be easily customized and deployed to solve application-specific data quality problems. In this paper, we present NADEEF, an extensible, generalized and easy-to-deploy data cleaning platform. NADEEF distinguishes between a programming interface and a core to achieve generality and extensibility. The programming interface allows the users to specify multiple types of data quality rules, which uniformly define what is wrong with the data and (possibly) how to repair it through writing code that implements predefined classes. We show that the programming interface can be used to express many types of data quality rules beyond the well known CFDs (FDs), MDs and ETL rules. Treating user implemented interfaces as black-boxes, the core provides algorithms to detect errors and to clean data. The core is designed in a way to allow cleaning algorithms to cope with multiple rules holistically, i.e. detecting and repairing data errors without differentiating between various types of rules. We showcase two implementations for core repairing algorithms. These two implementations demonstrate the extensibility of our core, which can also be replaced by other user-provided algorithms. Using real-life data, we experimentally verify the generality, extensibility, and effectiveness of our system.", "title": "" }, { "docid": "01202e09e54a1fc9f5b36d67fbbf3870", "text": "This paper is intended to investigate the copper-graphene surface plasmon resonance (SPR)-based biosensor by considering the high adsorption efficiency of graphene. Copper (Cu) is used as a plasmonic material whereas graphene is used to prevent Cu from oxidation and enhance the reflectance intensity. Numerical investigation is performed using finite-difference-time-domain (FDTD) method by comparing the sensing performance such as reflectance intensity that explains the sensor sensitivity and the full-width-at-half-maximum (FWHM) of the spectrum for detection accuracy. The measurements were observed with various Cu thin film thicknesses ranging from 20nm to 80nm with 785nm operating wavelength. The proposed sensor shows that the 40nm-thick Cu-graphene (1 layer) SPR-based sensor gave better performance with narrower plasmonic spectrum line width (reflectance intensity of 91.2%) and better FWHM of 3.08°. The measured results also indicate that the Cu-graphene SPR-based sensor is suitable for detecting urea with refractive index of 1.49 in dielectric medium.", "title": "" }, { "docid": "73c7c4ddfa01fb2b14c6a180c3357a55", "text": "Neurodevelopmental treatment according to Dr. K. and B. Bobath can be supplemented by hippotherapy. At proper control and guidance, an improvement in posture tone, inhibition of pathological movement patterns, facilitation of normal automatical reactions and the promotion of sensorimotor perceptions is achieved. By adjustment to the swaying movements of the horse, the child feels how to retain straightening alignment, symmetry and balance. 
Through the pleasure taken in this therapy, the child can be motivated to satisfactory cooperation and accepts the therapy horse as its friend. The results of hippotherapy for 27 children afflicted with cerebral palsy permit a conclusion as to the value of this treatment for movement and behaviour disturbance to be drawn.", "title": "" }, { "docid": "834a5cb9f2948443fbb48f274e02ca9c", "text": "The Carnegie Mellon Communicator is a telephone-based dialog system that supports planning in a travel domain. The implementation of such a system requires two complementary components, an architecture capable of managing interaction and the task, as well as a knowledge base that captures the speech, language and task characteristics specific to the domain. Given a suitable architecture, the principal effort in development is taken up in the acquisition and processing of a domain knowledge base. This paper describes a variety of techniques we have applied to modeling in acoustic, language, task, generation and synthesis components of the system.", "title": "" }, { "docid": "ef9235285ebbef109254bfb5968d2d6b", "text": "This paper proposes Dyadic Memory Networks (DyMemNN), a novel extension of end-to-end memory networks (memNN) for aspect-based sentiment analysis (ABSA). Originally designed for question answering tasks, memNN operates via a memory selection operation in which relevant memory pieces are adaptively selected based on the input query. In the problem of ABSA, this is analogous to aspects and documents in which the relationship between each word in the document is compared with the aspect vector. In the standard memory networks, simple dot products or feed forward neural networks are used to model the relationship between aspect and words which lacks representation learning capability. As such, our dyadic memory networks ameliorate this weakness by enabling rich dyadic interactions between aspect and word embeddings by integrating either parameterized neural tensor compositions or holographic compositions into the memory selection operation. To this end, we propose two variations of our dyadic memory networks, namely the Tensor DyMemNN and Holo DyMemNN. Overall, our two models are end-to-end neural architectures that enable rich dyadic interaction between aspect and document which intuitively leads to better performance. Via extensive experiments, we show that our proposed models achieve the state-of-the-art performance and outperform many neural architectures across six benchmark datasets.", "title": "" } ]
scidocsrr
fb2562254cc8426a3fc56f398e2f939a
An evaluation study of driver profiling fuzzy algorithms using smartphones
[ { "docid": "3a3e872846f997f6a8400ae6e7612a40", "text": "In this paper, we propose an approach to understand the driver behavior using smartphone sensors. The aim for analyzing the sensory data acquired using a smartphone is to design a car-independent system which does not need vehicle mounted sensors measuring turn rates, gas consumption or tire pressure. The sensory data utilized in this paper includes the accelerometer, gyroscope and the magnetometer. Using these sensors we obtain position, speed, acceleration, deceleration and deflection angle sensory information and estimate commuting safety by statistically analyzing driver behavior. In contrast to state of the art, this work uses no external sensors, resulting in a cost efficient, simplistic and user-friendly system.", "title": "" }, { "docid": "97ae5a8fec6175f3065f2612dce2e493", "text": "In spite of several technical advances made in recent years by the automotive industry, the driver's behaviour still influences significantly the overall fuel consumption. With the rise of smartphone adoption, there are also new opportunities to increase awareness of this issue. The main aim of this paper is to present a new smartphone application that will help drivers reduce the fuel consumption of their vehicles. This is accomplished by using the smartphone's sensors and the vehicle state to detect the driving pattern and suggest new behaviours in real time that will lead to a more efficient driving experience. The preliminary results show the potential for significant energy savings and their relevance for changing the drivers' behaviour.", "title": "" }, { "docid": "e9746cc48624d7ce494af43e3ff56cb3", "text": "Driving while being tired or distracted is dangerous. We are developing the CarSafe app for Android phones, which fuses information from both front and back cameras and other embedded sensors on the phone to detect and alert drivers to dangerous driving conditions in and outside of the car. CarSafe uses computer vision and machine learning algorithms on the phone to monitor and detect whether the driver is tired or distracted using the front camera while at the same time tracking road conditions using the back camera. CarSafe is the first dual-camera application for smart-phones.", "title": "" } ]
[ { "docid": "26d6ffbc4ee2e0f5e3e6699fd33bdc5f", "text": "We present a method for efficient learning of control policies for multiple related robotic motor skills. Our approach consists of two stages, joint training and specialization training. During the joint training stage, a neural network policy is trained with minimal information to disambiguate the motor skills. This forces the policy to learn a common representation of the different tasks. Then, during the specialization training stage we selectively split the weights of the policy based on a per-weight metric that measures the disagreement among the multiple tasks. By splitting part of the control policy, it can be further trained to specialize to each task. To update the control policy during learning, we use Trust Region Policy Optimization with Generalized Advantage Function (TRPOGAE). We propose a modification to the gradient update stage of TRPO to better accommodate multi-task learning scenarios. We evaluate our approach on three continuous motor skill learning problems in simulation: 1) a locomotion task where three single legged robots with considerable difference in shape and size are trained to hop forward, 2) a manipulation task where three robot manipulators with different sizes and joint types are trained to reach different locations in 3D space, and 3) locomotion of a two-legged robot, whose range of motion of one leg is constrained in different ways. We compare our training method to three baselines. The first baseline uses only jointtraining for the policy, the second trains independent policies for each task, and the last randomly selects weights to split. We show that our approach learns more efficiently than each of the baseline methods.", "title": "" }, { "docid": "07cb7c48a534cc002c5088225a540b1e", "text": "OBJECTIVES\nThe Health Information Technology for Economic and Clinical Health (HITECH) Act created incentives for adopting electronic health records (EHRs) for some healthcare organisations, but long-term care (LTC) facilities are excluded from those incentives. There are realisable benefits of EHR adoption in LTC facilities; however, there is limited research about this topic. The purpose of this systematic literature review is to identify EHR adoption factors for LTC facilities that are ineligible for the HITECH Act incentives.\n\n\nSETTING\nWe conducted systematic searches of Cumulative Index of Nursing and Allied Health Literature (CINAHL) Complete via Ebson B. Stephens Company (EBSCO Host), Google Scholar and the university library search engine to collect data about EHR adoption factors in LTC facilities since 2009.\n\n\nPARTICIPANTS\nSearch results were filtered by date range, full text, English language and academic journals (n=22).\n\n\nINTERVENTIONS\nMultiple members of the research team read each article to confirm applicability and study conclusions.\n\n\nPRIMARY AND SECONDARY OUTCOME MEASURES\nResearchers identified common themes across the literature: specifically facilitators and barriers to adoption of the EHR in LTC.\n\n\nRESULTS\nResults identify facilitators and barriers associated with EHR adoption in LTC facilities. The most common facilitators include access to information and error reduction. The most prevalent barriers include initial costs, user perceptions and implementation problems.\n\n\nCONCLUSIONS\nSimilarities span the system selection phases and implementation process; of those, cost was the most common mentioned. 
These commonalities should help leaders in LTC facilities align strategic decisions to EHR adoption. This review may be useful for decision-makers attempting successful EHR adoption, policymakers trying to increase adoption rates without expanding incentives and vendors that produce EHRs.", "title": "" }, { "docid": "c52d8ce6b7f478826b9a7727db13ed81", "text": "Previous studies indicate that the way we perceive our bodily signals, such as our heart rate, can influence how we feel. Inspired by these studies, we built EmotionCheck, which is a wearable device that can change users' perception of their heart rate through subtle vibrations on the wrist. The results of an experiment with 67 participants show that the EmotionCheck device can help users regulate their anxiety through false feedback of a slow heart rate.", "title": "" }, { "docid": "e04cccfd59c056678e39fc4aed0eaa2b", "text": "BACKGROUND\nBreast cancer is by far the most frequent cancer of women. However the preventive measures for such problem are probably less than expected. The objectives of this study are to assess breast cancer knowledge and attitudes and factors associated with the practice of breast self examination (BSE) among female teachers of Saudi Arabia.\n\n\nPATIENTS AND METHODS\nWe conducted a cross-sectional survey of teachers working in female schools in Buraidah, Saudi Arabia using a self-administered questionnaire to investigate participants' knowledge about the risk factors of breast cancer, their attitudes and screening behaviors. A sample of 376 female teachers was randomly selected. Participants lived in urban areas, and had an average age of 34.7 ±5.4 years.\n\n\nRESULTS\nMore than half of the women showed a limited knowledge level. Among participants, the most frequently reported risk factors were non-breast feeding and the use of female sex hormones. The printed media was the most common source of knowledge. Logistic regression analysis revealed that high income was the most significant predictor of better knowledge level. Knowing a non-relative case with breast cancer and having a high knowledge level were identified as the significant predictors for practicing BSE.\n\n\nCONCLUSIONS\nThe study points to the insufficient knowledge of female teachers about breast cancer and identified the negative influence of low knowledge on the practice of BSE. Accordingly, relevant educational programs to improve the knowledge level of women regarding breast cancer are needed.", "title": "" }, { "docid": "5baf5eb1c98a06ccf129fc65f539ea35", "text": "In this paper, we propose to incorporate topic aspects information for online comments convincingness evaluation. Our model makes use of graph convolutional network to utilize implicit topic information within a discussion thread to assist the evaluation of convincingness of each single comment. In order to test the effectiveness of our proposed model, we annotate topic information on top of a public dataset for argument convincingness evaluation. Experimental results show that topic information is able to improve the performance for convincingness evaluation. We also make a move to detect topic aspects automatically.", "title": "" }, { "docid": "7b18a6f76773b3f35df7da23e3a0dd1c", "text": "In todays world, health data are being produced in everincreasing amounts due to extensive use of medical devices generating data in digital form. These data are stored in diverse formats at different health information systems. 
Medical practitioners and researchers can be benefited significantly if these massive heterogeneous data could be integrated and made accessible through a common platform. On the other hand, digital health data containing protected health information (PHI) are the main target of the cybercriminals. In this paper, we have provided a state of the art review of the security threats in the integrated healthcare information systems. According to our analysis, healthcare data servers are leading target of the hackers because of monetary value. At present, attacks on healthcare organizations’ data are 1.25 times higher compared to five years ago. We have provided some important recommendations to minimize the risk of attacks and to reduce the chance of compromising patients’ privacy after any successful attack.", "title": "" }, { "docid": "048f237ad6cb844a79c63d7f6f3d6aa9", "text": "Superpixel segmentation has emerged as an important research problem in the areas of image processing and computer vision. In this paper, we propose a framework, namely Iterative Spanning Forest (ISF), in which improved sets of connected superpixels (supervoxels in 3D) can be generated by a sequence of Image Foresting Transforms. In this framework, one can choose the most suitable combination of ISF components for a given application - i.e., i) a seed sampling strategy, ii) a connectivity function, iii) an adjacency relation, and iv) a seed pixel recomputation procedure. The superpixels in ISF structurally correspond to spanning trees rooted at those seeds. We present five ISF-based methods to illustrate different choices for those components. These methods are compared with a number of state-of-the-art approaches with respect to effectiveness and efficiency. Experiments are carried out on several datasets containing 2D and 3D objects with distinct texture and shape properties, including a high-level application, named sky image segmentation. The theoretical properties of ISF are demonstrated in the supplementary material and the results show ISF-based methods rank consistently among the best for all datasets.", "title": "" }, { "docid": "fb1f3f300bcd48d99f0a553a709fdc89", "text": "This work includes a high step up voltage gain DC-DC converter for DC microgrid applications. The DC microgrid can be utilized for rural electrification, UPS support, Electronic lighting systems and Electrical vehicles. The whole system consists of a Photovoltaic panel (PV), High step up DC-DC converter with Maximum Power Point Tracking (MPPT) and DC microgrid. The entire system is optimized with both MPPT and converter separately. The MPP can be tracked by Incremental Conductance (IC) MPPT technique modified with D-Sweep (Duty ratio Sweep). D-sweep technique reduces the problem of multiple local maxima. Converter optimization includes a high step up DC-DC converter which comprises of both coupled inductor and switched capacitors. This increases the gain up to twenty times with high efficiency. Both converter optimization and MPPT optimization increases overall system efficiency. MATLAB/simulink model is implemented. Hardware of the system can be implemented by either voltage mode control or current mode control.", "title": "" }, { "docid": "6bee9f6c4a240cc53049f183d8079c62", "text": "This study aims to analyze the benefits of improved multiscale reasoning for object detection and localization with deep convolutional neural networks. 
To that end, an efficient and general object detection framework which operates on scale volumes of a deep feature pyramid is proposed. In contrast to the proposed approach, most current state-of-the-art object detectors operate on a single-scale in training, while testing involves independent evaluation across scales. One benefit of the proposed approach is in better capturing of multi-scale contextual information, resulting in significant gains in both detection performance and localization quality of objects on the PASCAL VOC dataset and a multi-view highway vehicles dataset. The joint detection and localization scale-specific models are shown to especially benefit detection of challenging object categories which exhibit large scale variation as well as detection of small objects.", "title": "" }, { "docid": "b009c2b4cc62f7cc430deb671de4a192", "text": "Electric vehicles are gaining importance and help to reduce dependency on oil, increase energy efficiency of transportation, reduce carbon emissions and noise, and avoid tail pipe emissions. Because of short driving distances, high mileages, and intermediate waiting times, fossil-fuelled taxi vehicles are ideal candidates for being replaced by battery electric vehicles (BEVs). Moreover, taxis as BEVs would increase visibility of electric mobility and therefore encourage others to purchase an electric vehicle. Prior to replacing conventional taxis with BEVs, a suitable charging infrastructure has to be established. This infrastructure, which is a prerequisite for the use of BEVs in practice, consists of a sufficiently dense network of charging stations taking into account the lower driving ranges of BEVs. In this case study we propose a decision support system for placing charging stations to satisfy the charging demand of electric taxi vehicles. Operational taxi data from about 800 vehicles is used to identify and estimate the charging demand for electric taxis based on frequent origins and destinations of trips. Next, a variant of the maximal covering location problem is formulated and solved, aiming at satisfying as much charging demand as possible with a limited number of charging stations. Already existing fast charging locations are considered in the optimization problem. In this work, we focus on finding regions in which charging stations should be placed, rather than exact locations. The exact location within an area is identified in a post-optimization phase (e.g., by authorities), where environmental conditions are considered, e.g., the capacity of the power network, availability of space, and legal issues. Our approach is implemented in the city of Vienna, Austria, in the course of an applied research project conducted in 2014. Local authorities, power network operators, representatives of taxi driver guilds as well as a radio taxi provider participated in the project and identified exact locations for charging stations based on our decision support system. ∗Corresponding author Email addresses: johannes.asamer@ait.ac.at (Johannes Asamer), martin.reinthaler@ait.ac.at (Martin Reinthaler), mario.ruthmair@univie.ac.at (Mario Ruthmair), markus.straub@ait.ac.at (Markus Straub), jakob.puchinger@centralesupelec.fr (Jakob Puchinger) Preprint submitted to Elsevier November 6, 2015", "title": "" }, { "docid": "9b08be9d250822850fda92819774248e", "text": "In recent years, recommendation systems have been widely used in various commercial platforms to provide recommendations for users. 
Collaborative filtering algorithms are one of the main algorithms used in recommendation systems. Such algorithms are simple and efficient; however, the sparsity of the data and the scalability of the method limit the performance of these algorithms, and it is difficult to further improve the quality of the recommendation results. Therefore, a model combining a collaborative filtering recommendation algorithm with deep learning technology is proposed, therein consisting of two parts. First, the model uses a feature representation method based on a quadric polynomial regression model, which obtains the latent features more accurately by improving upon the traditional matrix factorization algorithm. Then, these latent features are regarded as the input data of the deep neural network model, which is the second part of the proposed model and is used to predict the rating scores. Finally, by comparing with other recommendation algorithms on three public datasets, it is verified that the recommendation performance can be effectively improved by our model.", "title": "" }, { "docid": "2550502036aac5cf144cb8a0bc2d525b", "text": "Significant achievements have been made on the development of next-generation filtration and separation membranes using graphene materials, as graphene-based membranes can afford numerous novel mass-transport properties that are not possible in state-of-art commercial membranes, making them promising in areas such as membrane separation, water desalination, proton conductors, energy storage and conversion, etc. The latest developments on understanding mass transport through graphene-based membranes, including perfect graphene lattice, nanoporous graphene and graphene oxide membranes are reviewed here in relation to their potential applications. A summary and outlook is further provided on the opportunities and challenges in this arising field. The aspects discussed may enable researchers to better understand the mass-transport mechanism and to optimize the synthesis of graphene-based membranes toward large-scale production for a wide range of applications.", "title": "" }, { "docid": "af5e15777e3d7331ed8020de4af73f96", "text": "We present a virtual try-on system EON Interactive Mirror that employs one Kinect sensor and one High-Definition (HD) Camera. We first overview the major technical components for the complete virtual try-on system. We then elaborate on several key challenges such as calibration between the Kinect and HD cameras, and shoulder height estimation for individual subjects. Quality of these steps is the key to achieving seamless try-on experience for users. We also present performance comparison of our system implemented on top of two skeletal tracking SDKs: OpenNI and Kinect for Windows SDK (KWSDK). Lastly, we discuss our experience in deploying the system in retail stores and some potential future improvements.", "title": "" }, { "docid": "1f7f0b82bf5822ee51313edfd1cb1593", "text": "With the promise of meeting future capacity demands, 3-D massive-MIMO/full dimension multiple-input-multiple-output (FD-MIMO) systems have gained much interest in recent years. Apart from the huge spectral efficiency gain, 3-D massive-MIMO/FD-MIMO systems can also lead to significant reduction of latency, simplified multiple access layer, and robustness to interference. However, in order to completely extract the benefits of the system, accurate channel state information is critical. 
In this paper, a channel estimation method based on direction of arrival (DoA) estimation is presented for 3-D millimeter wave massive-MIMO orthogonal frequency division multiplexing (OFDM) systems. To be specific, the DoA is estimated using estimation of signal parameter via rotational invariance technique method, and the root mean square error of the DoA estimation is analytically characterized for the corresponding MIMO-OFDM system. An ergodic capacity analysis of the system in the presence of DoA estimation error is also conducted, and an optimum power allocation algorithm is derived. Furthermore, it is shown that the DoA-based channel estimation achieves a better performance than the traditional linear minimum mean squared error estimation in terms of ergodic throughput and minimum chordal distance between the subspaces of the downlink precoders obtained from the underlying channel and the estimated channel.", "title": "" }, { "docid": "a0b40209ee7655fcb08b080467d48915", "text": "This note describes a simplification of the GKR interactive proof for circuit evaluation (Goldwasser, Kalai, and Rothblum, J. ACM 2015), as efficiently instantiated by Cormode, Mitzenmacher, and Thaler (ITCS 2012). The simplification reduces the prover runtime, round complexity, and total communication cost of the protocol by roughly 33%.", "title": "" }, { "docid": "a1196d8624026339f66e843df68469d0", "text": "Two or more isoforms of several cytokines including tumor necrosis factors (tnfs) have been reported from teleost fish. Although zebrafish (Danio rerio) and medaka (Oryzias latipes) possess two tnf-α genes, their genomic location and existence are yet to be described and confirmed. Therefore, we conducted in silico identification, synteny analysis of tnf-α and tnf-n from both the fish with that of human TNF/lymphotoxin loci and their expression analysis in zebrafish. We identified two homologs of tnf-α (named as tnf-α1 and tnf-α2) and a tnf-n gene from zebrafish and medaka. Genomic location of these genes was found to be as: tnf-α1, and tnf-n and tnf-α2 genes on zebrafish chromosome 19 and 15 and medaka chromosome 11 and 16, respectively. Several features such as existence of TNF family signature, conservation of genes in TNF loci with human chromosome, phylogenetic clustering and amino acid similarity with other teleost TNFs confirmed their identity as tnf-α and tnf-n. There were a constitutive expression of all three genes in different tissues, and an increased expression of tnf-α1 and -α2 and a varied expression of tnf-n ligand in zebrafish head kidney cells induced with 20 μg mL(-1) LPS in vitro. Our results suggest the presence of two tnf-α homologs on different chromosomes of zebrafish and medaka and correlate this incidence arising from the fish whole genome duplication event.", "title": "" }, { "docid": "119696bc950e1c36fa9d09ee8c1aa6fb", "text": "A smart grid is an intelligent electricity grid that optimizes the generation, distribution and consumption of electricity through the introduction of Information and Communication Technologies on the electricity grid. In essence, smart grids bring profound changes in the information systems that drive them: new information flows coming from the electricity grid, new players such as decentralized producers of renewable energies, new uses such as electric vehicles and connected houses and new communicating equipments such as smart meters, sensors and remote control points. All this will cause a deluge of data that the energy companies will have to face. 
Big Data technologies offers suitable solutions for utilities, but the decision about which Big Data technology to use is critical. In this paper, we provide an overview of data management for smart grids, summarise the added value of Big Data technologies for this kind of data, and discuss the technical requirements, the tools and the main steps to implement Big Data solutions in the smart grid context.", "title": "" }, { "docid": "95410e1bfb8a5f42ff949d061b1cd4b9", "text": "This paper presents a high-level hand feature extraction method for real-time gesture recognition. Firstly, the fingers are modelled as cylindrical objects due to their parallel edge feature. Then a novel algorithm is proposed to directly extract fingers from salient hand edges. Considering the hand geometrical characteristics, the hand posture is segmented and described based on the finger positions, palm center location and wrist position. A weighted radial projection algorithm with the origin at the wrist position is applied to localize each finger. The developed system can not only extract extensional fingers but also flexional fingers with high accuracy. Furthermore, hand rotation and finger angle variation have no effect on the algorithm performance. The orientation of the gesture can be calculated without the aid of arm direction and it would not be disturbed by the bare arm area. Experiments have been performed to demonstrate that the proposed method can directly extract high-level hand feature and estimate hand poses in real-time. & 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "83ab7bbacc7b2a18faf580a2291b84ea", "text": "When viewed from distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-Splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU fp7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR Bsplines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. 
The spline models are well suited for change detection as new sensor data can efficiently be compared to the compact LR B-spline representation.", "title": "" }, { "docid": "0b9b85dc4f80e087f591f89b12bb6146", "text": "Entity profiling (EP) as an important task of Web mining and information extraction (IE) is the process of extracting entities in question and their related information from given text resources. From computational viewpoint, the Farsi language is one of the less-studied and less-resourced languages, and suffers from the lack of high quality language processing tools. This problem emphasizes the necessity of developing Farsi text processing systems. As an element of EP research, we present a semantic approach to extract profile of person entities from Farsi Web documents. Our approach includes three major components: (i) pre-processing, (ii) semantic analysis and (iii) attribute extraction. First, our system takes as input the raw text, and annotates the text using existing pre-processing tools. In semantic analysis stage, we analyze the pre-processed text syntactically and semantically and enrich the local processed information with semantic information obtained from a distant knowledge base. We then use a semantic rule-based approach to extract the related information of the persons in question. We show the effectiveness of our approach by testing it on a small Farsi corpus. The experimental results are encouraging and show that the proposed method outperforms baseline methods.", "title": "" } ]
scidocsrr
40ef1f19549f9521acba56c5785a4553
Social Network Extraction of Academic Researchers
[ { "docid": "7ff084619d05d21975ff41748a260418", "text": "In the development of speech recognition algorithms, it is important to know whether any apparent difference in performance of algorithms is statistically significant, yet this issue is almost always overlooked. We present two simple tests for deciding whether the difference in error-rates between two algorithms tested on the same data set is statistically significant. The first (McNemar’s test) requires the errors made by an algorithm to be independent events and is most appropriate for isolated word algorithms. The second (a matched-pairs test) can be used even when errors are not independent events and is more appropriate for connected speech.", "title": "" }, { "docid": "05fcab9232eadffb6e8de94a88c1cec1", "text": "Unsupervised clustering can be significantly improved using supervision in the form of pairwise constraints, i.e., pairs of instances labeled as belonging to same or different clusters. In recent years, a number of algorithms have been proposed for enhancing clustering quality by employing such supervision. Such methods use the constraints to either modify the objective function, or to learn the distance measure. We propose a probabilistic model for semi-supervised clustering based on Hidden Markov Random Fields (HMRFs) that provides a principled framework for incorporating supervision into prototype-based clustering. The model generalizes a previous approach that combines constraints and Euclidean distance learning, and allows the use of a broad range of clustering distortion measures, including Bregman divergences (e.g., Euclidean distance and I-divergence) and directional similarity measures (e.g., cosine similarity). We present an algorithm that performs partitional semi-supervised clustering of data by minimizing an objective function derived from the posterior energy of the HMRF model. Experimental results on several text data sets demonstrate the advantages of the proposed framework.", "title": "" } ]
[ { "docid": "7b3dd8bdc75bf99f358ef58b2d56e570", "text": "This paper studies asset allocation decisions in the presence of regime switching in asset returns. We find evidence that four separate regimes characterized as crash, slow growth, bull and recovery states are required to capture the joint distribution of stock and bond returns. Optimal asset allocations vary considerably across these states and change over time as investors revise their estimates of the state probabilities. In the crash state, buy-and-hold investors allocate more of their portfolio to stocks the longer their investment horizon, while the optimal allocation to stocks declines as a function of the investment horizon in bull markets. The joint effects of learning about state probabilities and predictability of asset returns from the dividend yield give rise to a non-monotonic relationship between the investment horizon and the demand for stocks. Welfare costs from ignoring regime switching can be substantial even after accounting for parameter uncertainty. Out-of-sample forecasting experiments confirm the economic importance of accounting for the presence of regimes in asset returns.", "title": "" }, { "docid": "5c85263f109a57662134607f2d50b095", "text": "Reducing Employee Turnover in Retail Environments: An Analysis of Servant Leadership Variables by Beatriz Rodriguez MBA, Webster University, 1994 BBA, University of Puerto Rico, 1989 Doctoral Study Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Business Administration Walden University August 2016 Abstract In a competitive retail environment, retail store managers (RSMs) need to retain retail customer service employees (RCSE) to maximize sales and reduce employee turnover costs. Servant leadership (SL) is a preferred leadership style within customer service organizations; however, there is disagreement regarding the usefulness of SL in the retail industry. The theoretical framework for this correlational study is Greenleaf’s SL theory. Seventy-four of 109 contacted human resources managers (HRMs) from a Fortune 500 United States retailer, with responsibility for evaluating leadership competencies of the RSMs they support, completed Liden’s Servant Leadership Questionnaire. RCSE turnover rates were available from company records. To analyze the correlation between the 3 SL constructs and RCSE turnover, multiple regression analysis with Pearson’s r providing sample correlation coefficients were used. Individually the 3 constructs FIRST (beta = .083, p = .692), EMPOWER (beta = -.076, p = .685), and GROW (beta = -.018, p = .917) were not statistically significant to predict RCSE turnover. The study multiple regression model with F (3,74) = .071, p = .98, R2 = .003 failed to demonstrate a significant correlation between SL constructs and turnover. Considering these findings, the HRMs could hire or train for different leadership skills that may be more applicable to effectively lead a retail sales force. In doing so, the implications for positive social change may result in RCSE retention leading to economic stability and career growth.In a competitive retail environment, retail store managers (RSMs) need to retain retail customer service employees (RCSE) to maximize sales and reduce employee turnover costs. Servant leadership (SL) is a preferred leadership style within customer service organizations; however, there is disagreement regarding the usefulness of SL in the retail industry. 
The theoretical framework for this correlational study is Greenleaf’s SL theory. Seventy-four of 109 contacted human resources managers (HRMs) from a Fortune 500 United States retailer, with responsibility for evaluating leadership competencies of the RSMs they support, completed Liden’s Servant Leadership Questionnaire. RCSE turnover rates were available from company records. To analyze the correlation between the 3 SL constructs and RCSE turnover, multiple regression analysis with Pearson’s r providing sample correlation coefficients were used. Individually the 3 constructs FIRST (beta = .083, p = .692), EMPOWER (beta = -.076, p = .685), and GROW (beta = -.018, p = .917) were not statistically significant to predict RCSE turnover. The study multiple regression model with F (3,74) = .071, p = .98, R2 = .003 failed to demonstrate a significant correlation between SL constructs and turnover. Considering these findings, the HRMs could hire or train for different leadership skills that may be more applicable to effectively lead a retail sales force. In doing so, the implications for positive social change may result in RCSE retention leading to economic stability and career growth. Reducing Employee Turnover in Retail Environments: An Analysis of Servant Leadership Variables by Beatriz Rodriguez MBA, Webster University, 1994 BBA, University of Puerto Rico, 1989 Doctoral Study Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Business Administration Walden University August 2016 Dedication I dedicate this to my three sons: Javi, J.J., and Javier. You inspire me to reach my goals, hoping that I can set you on the path to reach yours. May this work be an example that with hard work and perseverance, we can accomplish anything. Acknowledgments I would like to extend a heartfelt thanks to my husband, Alfredo, and my friend, Elaine. Their continuous encouragement, help, and support provided the fuel that helped me reach this point in my academic career. I am lucky to have such supportive individuals in my life. Along with my unwavering parents, they believed in my ability to succeed. Lastly, thank you to my Chair, Dr. John Hannon, Co-Chair, Dr. Perry Haan and URR committee member Dr. Lyn Szostek. Their dedication to Walden University and the students is commendable.", "title": "" }, { "docid": "1b5bb38b0a451238b2fc98a39d6766b0", "text": "OBJECTIVES\nWe quantified concomitant medication polypharmacy, pharmacokinetic and pharmacodynamic interactions, adverse effects and adherence in Australian adults on effective antiretroviral therapy.\n\n\nDESIGN\nCross-sectional.\n\n\nMETHODS\nPatients recruited into a nationwide cohort and assessed for prevalence and type of concomitant medication (including polypharmacy, defined as ≥5 concomitant medications), pharmacokinetic or pharmacodynamic interactions, potential concomitant medication adverse effects and concomitant medication adherence. Factors associated with concomitant medication polypharmacy and with imperfect adherence were identified using multivariable logistic regression.\n\n\nRESULTS\nOf 522 participants, 392 (75%) took a concomitant medication (mostly cardiovascular, nonprescription or antidepressant). Overall, 280 participants (54%) had polypharmacy of concomitant medications and/or a drug interaction or contraindication. 
Polypharmacy was present in 122 (23%) and independently associated with clinical trial participation, renal impairment, major comorbidity, hospital/general practice-based HIV care (versus sexual health clinic) and benzodiazepine use. Seventeen participants (3%) took at least one concomitant medication contraindicated with their antiretroviral therapy, and 237 (45%) had at least one pharmacokinetic/pharmacodynamic interaction. Concomitant medication use was significantly associated with sleep disturbance and myalgia, and polypharmacy of concomitant medications with diarrhoea, fatigue, myalgia and peripheral neuropathy. Sixty participants (12%) reported imperfect concomitant medication adherence, independently associated with requiring financial support, foregoing necessities for financial reasons, good/very good self-reported general health and at least 1 bed day for illness in the previous 12 months.\n\n\nCONCLUSION\nIn a resource-rich setting with universal healthcare access, the majority of this sample took a concomitant medication. Over half had at least one of concomitant medication polypharmacy, pharmacokinetic or pharmacodynamic interaction. Concomitant medication use was associated with several adverse clinical outcomes.", "title": "" }, { "docid": "b4c5ddab0cb3e850273275843d1f264f", "text": "The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). Based on the analysis of the tests and experimental results of all the 5 classifiers, the overall best performance was achieved by J48 decision tree with a recall of 95.9%, a false positive rate of 2.4%, a precision of 97.3%, and an accuracy of 96.8%. In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently.", "title": "" }, { "docid": "c1ba049befffa94e358555056df15cc2", "text": "People design what they say specifically for their conversational partners, and they adapt to their partners over the course of a conversation. A comparison of keyboard conversations involving a simulated computer partner (as in a natural language interface) with those involving a human partner (as in teleconferencing) yielded striking differences and some equally striking similarities. For instance, there were significantly fewer acknowledgments in human/computer dialogue than in human/human. However, regardless of the conversational partner, people expected connectedness across conversational turns. In addition, the style of a partner's response shaped what people subsequently typed. 
These results suggest some issues that need to be addressed before a natural language computer interface will be able to hold up its end of a conversation.", "title": "" }, { "docid": "d2a7b5cfb1a20ba5f66687326a8e1a3d", "text": "This paper proposes the efficient Frame Rate Up-conversion (FRUC) that has low computational complexity. The proposed algorithm consists of motion vector (MV) smoothing, selective average based motion compensation (SAMC) and hole interpolation with different weights. The proposed MV smoothing constructs more smooth interpolated frames by correcting inaccurate MVs. The proposed SAMC and hole interpolation effectively deal with overlaps and holes, and thus, they can efficiently reduce the degradation of the interpolated frames by removing blocking artifacts and blurring. Experimental results show that the proposed algorithm improves the average PSNR of the interpolated frames by 4.15dB than the conventional algorithm using bilateral ME, while it shows the average 0.16dB less PSNR performance than the existing algorithm using unilateral ME. However, it can significantly reduce the computational complexity based on absolute difference by 89.3%.", "title": "" }, { "docid": "07cd406cead1a086f61f363269de1aac", "text": "Tolerating and recovering from link and switch failures are fundamental requirements of most networks, including Software-Defined Networks (SDNs). However, instead of traditional behaviors such as network-wide routing re-convergence, failure recovery in an SDN is determined by the specific software logic running at the controller. While this admits more freedom to respond to a failure event, it ultimately means that each controller application must include its own recovery logic, which makes the code more difficult to write and potentially more error-prone.\n In this paper, we propose a runtime system that automates failure recovery and enables network developers to write simpler, failure-agnostic code. To this end, upon detecting a failure, our approach first spawns a new controller instance that runs in an emulated environment consisting of the network topology excluding the failed elements. Then, it quickly replays inputs observed by the controller before the failure occurred, leading the emulated network into the forwarding state that accounts for the failed elements. Finally, it recovers the network by installing the difference ruleset between emulated and current forwarding states.", "title": "" }, { "docid": "a712b6efb5c869619864cd817c2e27e1", "text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. 
We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.", "title": "" }, { "docid": "64efd590a51fc3cab97c9b4b17ba9b40", "text": "The problem of detecting bots, automated social media accounts governed by software but disguising as human users, has strong implications. For example, bots have been used to sway political elections by distorting online discourse, to manipulate the stock market, or to push anti-vaccine conspiracy theories that caused health epidemics. Most techniques proposed to date detect bots at the account level, by processing large amount of social media posts, and leveraging information from network structure, temporal dynamics, sentiment analysis, etc. In this paper, we propose a deep neural network based on contextual long short-term memory (LSTM) architecture that exploits both content and metadata to detect bots at the tweet level: contextual features are extracted from user metadata and fed as auxiliary input to LSTM deep nets processing the tweet text. Another contribution that we make is proposing a technique based on synthetic minority oversampling to generate a large labeled dataset, suitable for deep nets training, from a minimal amount of labeled data (roughly 3,000 examples of sophisticated Twitter bots). We demonstrate that, from just one single tweet, our architecture can achieve high classification accuracy (AUC > 96%) in separating bots from humans. We apply the same architecture to account-level bot detection, achieving nearly perfect classification accuracy (AUC > 99%). Our system outperforms previous state of the art while leveraging a small and interpretable set of features yet requiring minimal training data.", "title": "" }, { "docid": "dca2900c2b002e3119435bcf983c5aac", "text": "Substantial evidence suggests that the accumulation of beta-amyloid (Abeta)-derived peptides contributes to the aetiology of Alzheimer's disease (AD) by stimulating formation of free radicals. Thus, the antioxidant alpha-lipoate, which is able to cross the blood-brain barrier, would seem an ideal substance in the treatment of AD. We have investigated the potential effectiveness of alpha-lipoic acid (LA) against cytotoxicity induced by Abeta peptide (31-35) (30 microM) and hydrogen peroxide (H(2)O(2)) (100 microM) with the cellular 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) reduction and fluorescence dye propidium iodide assays in primary neurons of rat cerebral cortex. We found that treatment with LA protected cortical neurons against cytotoxicity induced by Abeta or H(2)O(2). In addition, LA-induced increase in the level of Akt in the neurons was observed by Western blot. The LA-induced neuroprotection and Akt increase were attenuated by pre-treatment with the phosphatidylinositol 3-kinase inhibitor, LY294002 (50 microM). Our data suggest that the neuroprotective effects of the antioxidant LA are partly mediated through activation of the PKB/Akt signaling pathway.", "title": "" }, { "docid": "f49923e0f36a47162ec087c661169459", "text": "People use imitation to encourage each other during conversation. 
We have conducted an experiment to investigate how imitation by a robot affect people’s perceptions of their conversation with it. The robot operated in one of three ways: full head gesture mimicking, partial head gesture mimicking (nodding), and non-mimicking (blinking). Participants rated how satisfied they were with the interaction. We hypothesized that participants in the full head gesture condition will rate their interaction the most positively, followed by the partial and non-mimicking conditions. We also performed gesture analysis to see if any differences existed between groups, and did find that men made significantly more gestures than women while interacting with the robot. Finally, we interviewed participants to try to ascertain additional insight into their feelings of rapport with the robot, which revealed a number of valuable insights.", "title": "" }, { "docid": "805d0578891511d3e3dab1309edded8f", "text": "We propose to learn a curriculum or a syllabus for deep reinforcement learning and supervised learning with deep neural networks by an attachable deep neural network, called ScreenerNet. Specifically, we learn a weight for each sample by jointly training the ScreenerNet and the main network in an end-to-end selfpaced fashion. The ScreenerNet has neither sampling bias nor memory for the past learning history. We show the networks augmented with the ScreenerNet converge faster with better accuracy than the state-of-the-art curricular learning methods in extensive experiments of a Cart-pole task using Deep Q-learning and supervised visual recognition task using three vision datasets such as Pascal VOC2012, CIFAR10, and MNIST. Moreover, the ScreenerNet can be combined with other curriculum learning methods such as Prioritized Experience Replay (PER) for further accuracy improvement.", "title": "" }, { "docid": "ec641ace6df07156891f2bf40ea5d072", "text": "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.", "title": "" }, { "docid": "40e7ea2295994e1b822b3e4ab968d9f9", "text": "This paper presents the use of a new meta-heuristic technique namely gray wolf optimizer (GWO) which is inspired from gray wolves’ leadership and hunting behaviors to solve optimal reactive power dispatch (ORPD) problem. ORPD problem is a well-known nonlinear optimization problem in power system. GWO is utilized to find the best combination of control variables such as generator voltages, tap changing transformers’ ratios as well as the amount of reactive compensation devices so that the loss and voltage deviation minimizations can be achieved. 
In this paper, two case studies of IEEE 30bus system and IEEE 118-bus system are used to show the effectiveness of GWO technique compared to other techniques available in literature. The results of this research show that GWO is able to achieve less power loss and voltage deviation than those determined by other techniques.", "title": "" }, { "docid": "a45d3d3d5f3d7716371c792d01e4cee8", "text": "Virtual personal assistants (VPA) (e.g., Amazon Alexa and Google Assistant) today mostly rely on the voice channel to communicate with their users, which however is known to be vulnerable, lacking proper authentication. The rapid growth of VPA skill markets opens a new attack avenue, potentially allowing a remote adversary to publish attack skills to attack a large number of VPA users through popular IoT devices such as Amazon Echo and Google Home. In this paper, we report a study that concludes such remote, large-scale attacks are indeed realistic. More specifically, we implemented two new attacks: voice squatting in which the adversary exploits the way a skill is invoked (e.g., “open capital one”), using a malicious skill with similarly pronounced name (e.g., “capital won”) or paraphrased name (e.g., “capital one please”) to hijack the voice command meant for a different skill, and voice masquerading in which a malicious skill impersonates the VPA service or a legitimate skill to steal the user’s data or eavesdrop on her conversations. These attacks aim at the way VPAs work or the user’s misconceptions about their functionalities, and are found to pose a realistic threat by our experiments (including user studies and real-world deployments) on Amazon Echo and Google Home. The significance of our findings have already been acknowledged by Amazon and Google, and further evidenced by the risky skills discovered on Alexa and Google markets by the new detection systems we built. We further developed techniques for automatic detection of these attacks, which already capture real-world skills likely to pose such threats. ∗All the squatting and impersonation vulnerabilities we discovered are reported to Amazon and Google and received their acknowledgement [7].", "title": "" }, { "docid": "d6abc85e62c28755ed6118257d9c25c3", "text": "MOTIVATION\nIn a previous paper, we presented a polynomial time dynamic programming algorithm for predicting optimal RNA secondary structure including pseudoknots. However, a formal grammatical representation for RNA secondary structure with pseudoknots was still lacking.\n\n\nRESULTS\nHere we show a one-to-one correspondence between that algorithm and a formal transformational grammar. This grammar class encompasses the context-free grammars and goes beyond to generate pseudoknotted structures. The pseudoknot grammar avoids the use of general context-sensitive rules by introducing a small number of auxiliary symbols used to reorder the strings generated by an otherwise context-free grammar. This formal representation of the residue correlations in RNA structure is important because it means we can build full probabilistic models of RNA secondary structure, including pseudoknots, and use them to optimally parse sequences in polynomial time.", "title": "" }, { "docid": "b53bd3f4a0d8933d9af0f5651a445800", "text": "Requirements for implemented system can be extracted and reused for a production of a new similar system. Extraction of common and variable features from requirements leverages the benefits of the software product lines engineering (SPLE). 
Although various approaches have been proposed in feature extractions from natural language (NL) requirements, no related literature review has been published to date for this topic. This paper provides a systematic literature review (SLR) of the state-of-the-art approaches in feature extractions from NL requirements for reuse in SPLE. We have included 13 studies in our synthesis of evidence and the results showed that hybrid natural language processing approaches were found to be in common for overall feature extraction process. A mixture of automated and semi-automated feature clustering approaches from data mining and information retrieval were also used to group common features, with only some approaches coming with support tools. However, most of the support tools proposed in the selected studies were not made available publicly and thus making it hard for practitioners’ adoption. As for the evaluation, this SLR reveals that not all studies employed software metrics as ways to validate experiments and case studies. Finally, the quality assessment conducted confirms that practitioners’ guidelines were absent in the selected studies. © 2015 Elsevier Inc. All rights reserved. c t t t r c S o r ( l w t r t", "title": "" }, { "docid": "b71b9a6990866c89ab7bc65338f61a9d", "text": "This paper compares advantages and disadvantages of several commonly used current sensing methods such as dedicated sense resistor sensing, MOSFET Rds(on) current sensing, and inductor DC resistance (DCR) current sensing. Among these current sense methods, inductor DCR current sense that shows more advantages over other current sensing methods is chosen for analysis. The time constants mismatch issue between the time constant made by the current sensing RC network and the one formed with output inductor and its DC resistance is addressed in this paper. And an unified small signal modeling of a buck converter using inductor DCR current sensing with matched and mismatched time constants is presented, and the modeling has been verified experimentally.", "title": "" }, { "docid": "02bd18358ac5cb5539a99d4c2babd2ea", "text": "This tutorial provides an overview of the key research results in the area of entity resolution that are relevant to addressing the new challenges in entity resolution posed by the Web of data, in which real world entities are described by interlinked data rather than documents. Since such descriptions are usually partial, overlapping and sometimes evolving, entity resolution emerges as a central problem both to increase dataset linking but also to search the Web of data for entities and their relations.", "title": "" }, { "docid": "114211e5f3dd526a8b78c1dcba98e9f1", "text": "We reviewed 75 primary total hip arthroplasty preoperative and postoperative radiographs and recorded limb length discrepancy, change in femoral offset, acetabular position, neck cut, and femoral component positioning. Interobturator line, as a technique to measure preoperative limb length discrepancy, had the least amount of variance when compared with interteardrop and intertuberosity lines (Levene test, P = .0527). The most common error in execution of preoperative templating was excessive limb lengthening (mean, 3.52 mm), primarily due to inferior acetabular cup positioning (Pearson correlation coefficient, P = .036). Incomplete medialization of the acetabular component contributed the most to offset discrepancy. 
The most common errors in the execution of preoperative templating resulted in excessive limb lengthening and increased offset. Identifying these errors can lead to more accurate templating techniques and improved intraoperative execution.", "title": "" } ]
scidocsrr
195508b43f3be5174c3d74d6b83f036f
Crowdfunding: Why People Are Motivated to Post and Fund Projects on Crowdfunding Platforms
[ { "docid": "27c2c015c6daaac99b34d00845ec646c", "text": "Virtual worlds, such as Second Life and Everquest, have grown into virtual game communities that have economic potential. In such communities, virtual items are bought and sold between individuals for real money. The study detailed in this paper aims to identify, model and test the individual determinants for the decision to purchase virtual items within virtual game communities. A comprehensive understanding of these key determinants will enable researchers to further the understanding of player behavior towards virtual item transactions, which are an important aspect of the economic system within virtual games and often raise one of the biggest challenges for game community operators. A model will be developed via a mixture of new constructs and established theories, including the theory of planned behavior (TPB), the technology acceptance model (TAM), trust theory and unified theory of acceptance and use of technology (UTAUT). For this purpose the research uses a sequential, multi-method approach in two phases: combining the use of inductive, qualitative data from focus groups and expert interviews in phase one; and deductive, quantitative survey data in phase two. The final model will hopefully provide an impetus to further research in the area of virtual game community transaction behavior. The paper rounds off with a discussion of further research challenges in this area over the next seven years.", "title": "" } ]
[ { "docid": "55dd9bf3372b1ae383d43664d60e9da8", "text": "In this report, we consider the task of automated assessment of English as a Second Language (ESOL) examination scripts written in response to prompts eliciting free text answers. We review and critically evaluate previous work on automated assessment for essays, especially when applied to ESOL text. We formally define the task as discriminative preference ranking and develop a new system trained and tested on a corpus of manually-graded scripts. We show experimentally that our best performing system is very close to the upper bound for the task, as defined by the agreement between human examiners on the same corpus. Finally we argue that our approach, unlike extant solutions, is relatively prompt-insensitive and resistant to subversion, even when its operating principles are in the public domain. These properties make our approach significantly more viable for high-stakes assessment.", "title": "" }, { "docid": "cce36b208b8266ddacc8baea18cd994b", "text": "Shape from shading is known to be an ill-posed problem. We show in this paper that if we model the problem in a different way than it is usually done, more precisely by taking into account the 1/r/sup 2/ attenuation term of the illumination, shape from shading becomes completely well-posed. Thus the shading allows to recover (almost) any surface from only one image (of this surface) without any additional data (in particular, without the knowledge of the heights of the solution at the local intensity \"minima\", contrary to [P. Dupuis et al. (1994), E. Prados et al. (2004), B. Horn (1986), E. Rouy et al. (1992), R. Kimmel et al. (2001)]) and without regularity assumptions (contrary to [J. Oliensis et al. (1993), R. Kimmel et al. (1995)], for example). More precisely, we formulate the problem as that of solving a new partial differential equation (PDE), we develop a complete mathematical study of this equation and we design a new provably convergent numerical method. Finally, we present results of our new shape from shading method on various synthetic and real images.", "title": "" }, { "docid": "688ee7a4bde400a6afbd6972d729fad4", "text": "Learning-to-Rank ( LtR ) techniques leverage machine learning algorithms and large amounts of training data to induce high-quality ranking functions. Given a set of documents and a user query, these functions are able to precisely predict a score for each of the documents, in turn exploited to effectively rank them. Although the scoring efficiency of LtR models is critical in several applications – e.g., it directly impacts on response time and throughput of Web query processing – it has received relatively little attention so far. The goal of this work is to experimentally investigate the scoring efficiency of LtR models along with their ranking quality. Specifically, we show that machine-learned ranking models exhibit a quality versus efficiency trade-off. For example, each family of LtR algorithms has tuning parameters that can influence both effectiveness and efficiency, where higher ranking quality is generally obtained with more complex and expensive models. Moreover, LtR algorithms that learn complex models, such as those based on forests of regression trees, are generally more expensive and more effective than other algorithms that induce simpler models like linear combination of features. 
We extensively analyze the quality versus efficiency trade-off of a wide spectrum of stateof-the-art LtR , and we propose a sound methodology to devise the most effective ranker given a time budget. To guarantee reproducibility, we used publicly available datasets and we contribute an open source C++ framework providing optimized, multi-threaded implementations of the most effective tree-based learners: Gradient Boosted Regression Trees ( GBRT ), Lambda-Mart ( λ-MART ), and the first public-domain implementation of Oblivious Lambda-Mart ( λ-MART ), an algorithm that induces forests of oblivious regression trees. We investigate how the different training parameters impact on the quality versus efficiency trade-off, and provide a thorough comparison of several algorithms in the qualitycost space. The experiments conducted show that there is not an overall best algorithm, but the optimal choice depends on the time budget. © 2016 Elsevier Ltd. All rights reserved. ∗ Corresponding author. E-mail addresses: gabriele.capannini@mdh.se (G. Capannini), claudio.lucchese@isti.cnr.it , c.lucchese@isti.cnr.it (C. Lucchese), f.nardini@isti.cnr.it (F.M. Nardini), orlando@unive.it (S. Orlando), r.perego@isti.cnr.it (R. Perego), n.tonellotto@isti.cnr.it (N. Tonellotto). http://dx.doi.org/10.1016/j.ipm.2016.05.004 0306-4573/© 2016 Elsevier Ltd. All rights reserved. Please cite this article as: G. Capannini et al., Quality versus efficiency in document scoring with learning-to-rank models, Information Processing and Management (2016), http://dx.doi.org/10.1016/j.ipm.2016.05.004 2 G. Capannini et al. / Information Processing and Management 0 0 0 (2016) 1–17 ARTICLE IN PRESS JID: IPM [m3Gsc; May 17, 2016;19:28 ] Document Index Base Ranker Top Ranker Features Learning to Rank Algorithm Query First step Second step N docs K docs 1. ............ 2. ............ 3. ............ K. ............ ... ... Results Page(s) Fig. 1. The architecture of a generic machine-learned ranking pipeline.", "title": "" }, { "docid": "9d461f2c62d377e4614a543a6fa532fc", "text": "We consider a natural family of motion planning problems with movable obstacles and obtain hardness results for them. Some members of the family are shown to be PSPACE-complete thus improving and extending (and also simplifying) a previous NP-hardness result of Wilfong. The family considered includes a motion planning problem which forms the basis of a popular computer game called SOKOBAN. The decision problem corresponding to SOKOBAN is shown to be NP-hard. The motion planning problems considered are related to the \\warehouseman's problem\" considered by Hopcroft, Schwartz and Sharir, and to geometric versions of the motion planning problem on graphs considered", "title": "" }, { "docid": "518a9ed23b2989c131fa46b740ab26a6", "text": "The idea is to identify security-critical software bugs so they can be fixed first.", "title": "" }, { "docid": "23e32a61107fe286e432d5f2ecda7bad", "text": "How do we scale information extraction to the massive size and unprecedented heterogeneity of the Web corpus? Beginning in 2003, our KnowItAll project has sought to extract high-quality knowledge from the Web. In 2007, we introduced the Open Information Extraction (Open IE) paradigm which eschews handlabeled training examples, and avoids domainspecific verbs and nouns, to develop unlexicalized, domain-independent extractors that scale to the Web corpus. 
Open IE systems have extracted billions of assertions as the basis for both commonsense knowledge and novel question-answering systems. This paper describes the second generation of Open IE systems, which rely on a novel model of how relations and their arguments are expressed in English sentences to double precision/recall compared with previous systems such as TEXTRUNNER and WOE.", "title": "" }, { "docid": "d9a87325efbd29520c37ec46531c6062", "text": "Predicting the risk of potential diseases from Electronic Health Records (EHR) has attracted considerable attention in recent years, especially with the development of deep learning techniques. Compared with traditional machine learning models, deep learning based approaches achieve superior performance on risk prediction task. However, none of existing work explicitly takes prior medical knowledge (such as the relationships between diseases and corresponding risk factors) into account. In medical domain, knowledge is usually represented by discrete and arbitrary rules. Thus, how to integrate such medical rules into existing risk prediction models to improve the performance is a challenge. To tackle this challenge, we propose a novel and general framework called PRIME for risk prediction task, which can successfully incorporate discrete prior medical knowledge into all of the state-of-the-art predictive models using posterior regularization technique. Different from traditional posterior regularization, we do not need to manually set a bound for each piece of prior medical knowledge when modeling desired distribution of the target disease on patients. Moreover, the proposed PRIME can automatically learn the importance of different prior knowledge with a log-linear model.Experimental results on three real medical datasets demonstrate the effectiveness of the proposed framework for the task of risk prediction", "title": "" }, { "docid": "89a881c7cf9f6008d6d5ae6d2b55e919", "text": "Commercial software development teams have limited time available to focus on improvements to their software. These teams need a way to quickly identify areas of the source code that would benefit from improvement, as well as quantifiable data to defend the selected improvements to management. Past research has shown that mining configuration management systems for change information can be useful in determining faulty areas of the code. We present a tool named Code Hot Spot, which mines change records out of Microsoft's TFS configuration management system and creates a report of hot spots. Hot spots are contiguous areas of the code that have higher values of metrics that are indicators of faulty code. We present a study where we use this tool to study projects at ABB to determine areas that need improvement. The resulting data have been used to prioritize areas for additional code reviews and unit testing, as well as identifying change prone areas in need of refactoring.", "title": "" }, { "docid": "e6ff00b275f28864fb98af7f9643beca", "text": "Although the distributed file system is a widely used technology in local area networks, it has seen less use on the wide area networks that connect clusters, clouds, and grids. One reason for this is access control: existing file system technologies require either the client machine to be fully trusted, or the client process to hold a high value user credential, neither of which is practical in large scale systems. 
To address this problem, we have designed a system for fine-grained access control which dramatically reduces the amount of trust required of a batch job accessing a distributed file system. We have implemented this system in the context of the Chirp user-level distributed file system used in clusters, clouds, and grids, but the concepts can be applied to almost any other storage system. The system is evaluated to show that performance and scalability are similar to other authentication methods. The paper concludes with a discussion of integrating the authentication system into workflow systems.", "title": "" }, { "docid": "5ceb6e39c8f826c0a7fd0e5086090a5f", "text": "Mobile botnet phenomenon is gaining popularity among malware writers in order to exploit vulnerabilities in smartphones. In particular, mobile botnets enable illegal access to a victim’s smartphone, can compromise critical user data and launch a DDoS attack through Command and Control (C&C). In this article, we propose a static analysis approach, DeDroid, to investigate botnet-specific properties that can be used to detect mobile applications with botnet intensions. Initially, we identify critical features by observing code behavior of the few known malware binaries having C&C features. Then, we compare the identified features with the malicious and benign applications of Drebin dataset. The results show against the comparative analysis that, Drebin dataset has 35% malicious applications which qualify as botnets. Upon closer examination, 90% of the potential botnets are confirmed as botnets. Similarly, for comparative analysis against benign applications having C&C features, DeDroid has achieved adequate detection accuracy. In addition, DeDroid has achieved high accuracy with negligible false positive rate while making decision for state-of-the-art malicious applications.", "title": "" }, { "docid": "23208f44270f69c4de1640bb1c865a73", "text": "In order to provide a wide variety of mobile services and applications, the fifth-generation (5G) mobile communication system has attracted much attention to improve system capacity much more than the 4G system. The drastic improvement is mainly realized by small/semi-macro cell deployment with much wider bandwidth in higher frequency bands. To cope with larger pathloss in the higher frequency bands, Massive MIMO is one of key technologies to acquire beamforming (BF) in addition to spatial multiplexing. This paper introduces 5G Massive MIMO technologies including high-performance hybrid BF and novel digital BF schemes in addition to distributed Massive MIMO concept with flexible antenna deployment. The latest 5G experimental trials using the Massive MIMO technologies are also shown briefly.", "title": "" }, { "docid": "08b8c184ff2230b0df2c0f9b4e3f7840", "text": "We present an augmented reality magic mirror for teaching anatomy. The system uses a depth camera to track the pose of a user standing in front of a large display. A volume visualization of a CT dataset is augmented onto the user, creating the illusion that the user can look into his body. Using gestures, different slices from the CT and a photographic dataset can be selected for visualization. In addition, the system can show 3D models of organs, text information and images about anatomy. For interaction with this data we present a new interaction metaphor that makes use of the depth camera. The visibility of hands and body is modified based on the distance to a virtual interaction plane. 
This helps the user to understand the spatial relations between his body and the virtual interaction plane.", "title": "" }, { "docid": "ff51471f40278617cc35ca2d37215e29", "text": "This paper presents a novel design method to reduce cogging torque of interior type permanent magnet (IPM) motor. In the design method, the optimal notches are put on the rotor pole face, which have an effect on variation of permanent magnet (PM) shape or residual flux density of PM. Through the space harmonics field analysis, the positions of notches are found and the optimal shapes of notches are determined by using finite element method (FEM). The validity of the proposed method is confirmed by experiments.", "title": "" }, { "docid": "b9dcc111261fa97e2d36b9a536a5861d", "text": "We present the first open-source tool for annotating morphosyntactic tense, mood and voice for English, French and German verbal complexes. The annotation is based on a set of language-specific rules, which are applied on dependency trees and leverage information about lemmas, morphological properties and POS-tags of the verbs. Our tool has an average accuracy of about 76%. The tense, mood and voice features are useful both as features in computational modeling and for corpuslinguistic research.", "title": "" }, { "docid": "8074d30cb422922bc134d07547932685", "text": "Research paper recommenders emerged over the last decade to ease finding publications relating to researchers' area of interest. The challenge was not just to provide researchers with very rich publications at any time, any place and in any form but to also offer the right publication to the right researcher in the right way. Several approaches exist in handling paper recommender systems. However, these approaches assumed the availability of the whole contents of the recommending papers to be freely accessible, which is not always true due to factors such as copyright restrictions. This paper presents a collaborative approach for research paper recommender system. By leveraging the advantages of collaborative filtering approach, we utilize the publicly available contextual metadata to infer the hidden associations that exist between research papers in order to personalize recommendations. The novelty of our proposed approach is that it provides personalized recommendations regardless of the research field and regardless of the user's expertise. Using a publicly available dataset, our proposed approach has recorded a significant improvement over other baseline methods in measuring both the overall performance and the ability to return relevant and useful publications at the top of the recommendation list.", "title": "" }, { "docid": "695280a99e19a2b6ec024aa8adb0a563", "text": "There is often a wealth of extant domain-specific, natural-language data available to help guide developers of object-oriented systems. This data is generally amenable to natural language processing i n order to derive valuable design information. Howeve r, from a review of the field, we contend that current approaches have not been able to extract all the semantic and design detail that is present in such data. For instance: we see a lack of dynamic system representations; we detect reluctance for researche rs to adopt hybrid solutions where users confirm and elaborate automated analyses; and we suggest there is work needed to determine a comprehensive repertoire of potential relationships between system component s. 
In partial pursuit of such a manifesto, this paper mentions briefly our algorithmic proposal for processing natural language into UML with supplemental user involvement.", "title": "" }, { "docid": "d44b15b2e8bbc198030746a46c47e00c", "text": "Recent advances in far-field optical nanoscopy have enabled fluorescence imaging with a spatial resolution of 20 to 50 nanometers. Multicolor super-resolution imaging, however, remains a challenging task. Here, we introduce a family of photo-switchable fluorescent probes and demonstrate multicolor stochastic optical reconstruction microscopy (STORM). Each probe consists of a photo-switchable \"reporter\" fluorophore that can be cycled between fluorescent and dark states, and an \"activator\" that facilitates photo-activation of the reporter. Combinatorial pairing of reporters and activators allows the creation of probes with many distinct colors. Iterative, color-specific activation of sparse subsets of these probes allows their localization with nanometer accuracy, enabling the construction of a super-resolution STORM image. Using this approach, we demonstrate multicolor imaging of DNA model samples and mammalian cells with 20- to 30-nanometer resolution. This technique will facilitate direct visualization of molecular interactions at the nanometer scale.", "title": "" }, { "docid": "70fafdedd05a40db5af1eabdf07d431c", "text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.", "title": "" }, { "docid": "ecd8f70442aa40cd2088f4324fe0d247", "text": "Black box variational inference allows researchers to easily prototype and evaluate an array of models. Recent advances allow such algorithms to scale to high dimensions. However, a central question remains: How to specify an expressive variational distribution that maintains efficient computation? To address this, we develop hierarchical variational models (HVMs). HVMs augment a variational approximation with a prior on its parameters, which allows it to capture complex structure for both discrete and continuous latent variables. The algorithm we develop is black box, can be used for any HVM, and has the same computational efficiency as the original approximation. We study HVMs on a variety of deep discrete latent variable models. 
HVMs generalize other expressive variational distributions and maintain higher fidelity to the posterior.", "title": "" } ]
scidocsrr
f25236a5f361a400ffec371604158688
Music therapy for depression.
[ { "docid": "f84f279b6ef3b112a0411f5cba82e1b0", "text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed", "title": "" } ]
[ { "docid": "97c97bb10c66c4733ca3e2ab3c0ae5cc", "text": "people by businesses and governments is ubiquitous. One of the main threats to people's privacy comes from human carelessness with this information, yet little empirical research has studied behaviors associated with information carelessness and the ways that people exploit this vulnerability. The studies that have investigated this important question have not been grounded in theory. In particular , the extant literature reveals little about social engineering threats and the reasons why people may or may not fall victim. Synthesizing theory from the marketing literature to explain consumer behavior, an empirical field study was conducted to see if factors that account for successful marketing campaigns may also account for successful social engineering attacks. People have become increasingly aware of the pervasive threats to information security and there are a variety of solutions now available for solving the problem of information insecurity such as improving technologies, including the application of advanced cryptography, or techniques, such as performing risk analyses and risk mitigation (Bresz, 2004; Sasse, Brostoff & Weirich, 2004). There has also been important suggestions from the information systems (IS) security literature that include augmenting security procedures as a solution (cf. Debar & Viinikka 2006), addressing situational factors such as reducing workload so that security professionals have time to implement the recommended procedures (Albrechtsen, 2007), improving the quality of policies (von Solms & von Solms, 2004), improving the alignment between an organization's security goals and its practices (Leach, 2003), and gaining improvements from software developers regarding the security implementations during the software development cycle (Jones & Rastogi, 2004). Yet despite all these recommendations, people often fail to take basic security precautions that result in billions of dollars annually in individual and corporate losses and even \" Knowing better, but not doing better \" is thus one of the key scholarly and practical issues that have not been fully addressed. One area of particular concern involves threats from social engineering. Social engineering consists of techniques used to manipulate people into performing actions or divulging confidential information (Mitnick & Simon, 2002). Social engineers often attempt to persuade potential victims with appeals to strong emotions such as excitement or fear, whereas others utilize ways to establish interpersonal relationships or create a feeling of trust and commitment (Gao & Kim, 2007). For example, they may promise that valuable prize or interest from a transfer bank deposit will be given if the victim complies with a request …", "title": "" }, { "docid": "085155ebfd2ac60ed65293129cb0bfee", "text": "Today, Convolution Neural Networks (CNN) is adopted by various application areas such as computer vision, speech recognition, and natural language processing. Due to a massive amount of computing for CNN, CNN running on an embedded platform may not meet the performance requirement. In this paper, we propose a system-on-chip (SoC) CNN architecture synthesized by high level synthesis (HLS). HLS is an effective hardware (HW) synthesis method in terms of both development effort and performance. However, the implementation should be optimized carefully in order to achieve a satisfactory performance. 
Thus, we apply several optimization techniques to the proposed CNN architecture to satisfy the performance requirement. The proposed CNN architecture implemented on a Xilinx's Zynq platform has achieved 23% faster and 9.05 times better throughput per energy consumption than an implementation on an Intel i7 Core processor.", "title": "" }, { "docid": "5d79d7e9498d7d41fbc7c70d94e6a9ae", "text": "Reasoning about objects and their affordances is a fundamental problem for visual intelligence. Most of the previous work casts this problem as a classification task where separate classifiers are trained to label objects, recognize attributes, or assign affordances. In this work, we consider the problem of object affordance reasoning using a knowledge base representation. Diverse information of objects are first harvested from images and other meta-data sources. We then learn a knowledge base (KB) using a Markov Logic Network (MLN). Given the learned KB, we show that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zeroshot affordance prediction and object recognition given human poses.", "title": "" }, { "docid": "1d1db0c5943e6141d0d62c20d706a51f", "text": "The use of renewable energy source (RES) in meet the demand of electrical energy is getting into attention as solution of the problem a deficit of electrical energy. Application of RES in electricity generation system is done in a variety of configurations, among others in microgrid system. Implementation of microgrid systems provide many advantages both from the user and from the electric utility provider. Many microgrid development carried out in several countries, because microgrid offers many advantages, including better power quality and more environmentally friendly. Microgrid development concern in technology generation, microgrid architecture, power electronics, control systems, protection systems. This paper reviewing various technological developments related to microgrid system and case study about microgrid system development using grid tie inverter (GTI). Microgrid system can implemented using GTI, power transfer can occur from GTI to grid when GTI has power excess and grid supplying power to GTI when GTI power shortage.", "title": "" }, { "docid": "89bcf5b0af2f8bf6121e28d36ca78e95", "text": "3 Relating modules to external clinical traits 2 3.a Quantifying module–trait associations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 3.b Gene relationship to trait and important modules: Gene Significance and Module Membership . . . . 2 3.c Intramodular analysis: identifying genes with high GS and MM . . . . . . . . . . . . . . . . . . . . . . 3 3.d Summary output of network analysis results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4", "title": "" }, { "docid": "a10a51d1070396e1e8a8b186af18f87d", "text": "An upcoming trend for automobile manufacturers is to provide firmware updates over the air (FOTA) as a service. Since the firmware controls the functionality of a vehicle, security is important. To this end, several secure FOTA protocols have been developed. However, the secure FOTA protocols only solve the security for the transmission of the firmware binary. Once the firmware is downloaded, an attacker could potentially modify its contents before it is flashed to the corresponding ECU'S ROM. 
Thus, there is a need to extend the flashing procedure to also verify that the correct firmware has been flashed to the ECU. We present a framework for self-verification of firmware updates over the air. We include a verification code in the transmission to the vehicle, and after the firmware has been flashed, the integrity of the memory contents can be verified using the verification code. The verification procedure entails only simple hash functions and is thus suitable for the limited resources in the vehicle. Virtualization techniques are employed to establish a trusted computing base in the ECU, which is then used to perform the verification. The proposed framework allows the ECU itself to perform self-verification and can thus ensure the successful flashing of the firmware.", "title": "" }, { "docid": "298cee7d5283cae1debcaf40ce18eb42", "text": "Fluidic circuits made up of tiny chambers, conduits, and membranes can be fabricated in soft substrates to realize pressure-based sequential logic functions. Additional chambers in the same substrate covered with thin membranes can function as bubble-like tactile features. Sequential addressing of bubbles with fluidic logic enables just two external electronic valves to control of any number of tactile features by \"clocking in\" pressure states one at a time. But every additional actuator added to a shift register requires an additional clock pulse to address, so that the display refresh rate scales inversely with the number of actuators in an array. In this paper, we build a model of a fluidic logic circuit that can be used for sequential addressing of bubble actuators. The model takes the form of a hybrid automaton combining the discrete dynamics of valve switching and the continuous dynamics of compressible fluid flow through fluidic resistors (conduits) and capacitors (chambers). When parameters are set according to the results of system identification experiments on a physical prototype, pressure trajectories and propagation delays predicted by simulation of the hybrid automaton compare favorably to experiment. The propagation delay in turn determines the maximum clock rate and associated refresh rate for a refreshable braille display intended for rendering a full page of braille text or tactile graphics.", "title": "" }, { "docid": "1e8195deeecb793c65b02924f2da3ef2", "text": "This paper provides an introductory survey of a class of optimization problems known as bilevel programming. We motivate this class through a simple application, and then proceed with the general formulation of bilevel programs. We consider various cases (linear, linear-quadratic, nonlinear), describe their main properties and give an overview of solution approaches.", "title": "" }, { "docid": "bd12f418cd731f9103a3d47ebac6951b", "text": "Smartphones and tablets provide access to the Web anywhere and anytime. Automatic Text Summarization techniques aim to extract the fundamental information in documents. Making automatic summarization work in portable devices is a challenge, in several aspects. This paper presents an automatic summarization application for Android devices. The proposed solution is a multi-feature language independent summarization application targeted at news articles. 
Several evaluation assessments were conducted and indicate that the proposed solution provides good results.", "title": "" }, { "docid": "f6078d97e0d0baf7e483824993f79c2c", "text": "In this paper, we propose a novel Convolutional Neural Network (CNN) structure for general-purpose multi-task learning (MTL), which enables automatic feature fusing at every layer from different tasks. This is in contrast with the most widely used MTL CNN structures which empirically or heuristically share features on some specific layers (e.g., share all the features except the last convolutional layer). The proposed layerwise feature fusing scheme is formulated by combining existing CNN components in a novel way, with clear mathematical interpretability as discriminative dimensionality reduction, which is referred to as Neural Discriminative Dimensionality Reduction (NDDR). Specifically, we first concatenate features with the same spatial resolution from different tasks according to their channel dimension. Then, we show that the discriminative dimensionality reduction can be fulfilled by 1 × 1 Convolution, Batch Normalization, and Weight Decay in one CNN. The use of existing CNN components ensures the end-to-end training and the extensibility of the proposed NDDR layer to various state-of-the-art CNN architectures in a “plug-andplay” manner. The detailed ablation analysis shows that the proposed NDDR layer is easy to train and also robust to different hyperparameters. Experiments on different task sets with various base network architectures demonstrate the promising performance and desirable generalizability of our proposed method.", "title": "" }, { "docid": "20cbfe9c1d20bfd67bbcbf39641aa69a", "text": "The CIPS-SIGHAN CLP 2010 Chinese Word Segmentation Bakeoff was held in the summer of 2010 to evaluate the current state of the art in word segmentation. It focused on the crossdomain performance of Chinese word segmentation algorithms. Eighteen groups submitted 128 results over two tracks (open training and closed training), four domains (literature, computer science, medicine and finance) and two subtasks (simplified Chinese and traditional Chinese). We found that compared with the previous Chinese word segmentation bakeoffs, the performance of cross-domain Chinese word segmentation is not much lower, and the out-of-vocabulary recall is improved.", "title": "" }, { "docid": "39971848bd1020694676b530b3e6540b", "text": "This paper presents an unequal Wilkinson power divider operating at arbitrary dual band without reactive components (such as inductors and capacitors). To satisfy the unequal characteristic, a novel structure is proposed with two groups of transmission lines and two parallel stubs. Closed-form equations containing all parameters of this structure are derived based on circuit theory and transmission line theory. For verification, two groups of experimental results including open and short stubs are presented. It can be found that all the analytical features of this unequal power divider can be fulfilled at arbitrary dual band simultaneously.", "title": "" }, { "docid": "8074d30cb422922bc134d07547932685", "text": "Research paper recommenders emerged over the last decade to ease finding publications relating to researchers' area of interest. The challenge was not just to provide researchers with very rich publications at any time, any place and in any form but to also offer the right publication to the right researcher in the right way. 
Several approaches exist in handling paper recommender systems. However, these approaches assumed the availability of the whole contents of the recommending papers to be freely accessible, which is not always true due to factors such as copyright restrictions. This paper presents a collaborative approach for research paper recommender system. By leveraging the advantages of collaborative filtering approach, we utilize the publicly available contextual metadata to infer the hidden associations that exist between research papers in order to personalize recommendations. The novelty of our proposed approach is that it provides personalized recommendations regardless of the research field and regardless of the user's expertise. Using a publicly available dataset, our proposed approach has recorded a significant improvement over other baseline methods in measuring both the overall performance and the ability to return relevant and useful publications at the top of the recommendation list.", "title": "" }, { "docid": "b23230f0386f185b7d5eb191034d58ec", "text": "Risk management in global information technology (IT) projects is becoming a critical area of concern for practitioners. Global IT projects usually span multiple locations involving various culturally diverse groups that use multiple standards and technologies. These multiplicities cause dynamic risks through interactions among internal (i.e., people, process, and technology) and external elements (i.e., business and natural environments) of global IT projects. This study proposes an agile risk-management framework for global IT project settings. By analyzing the dynamic interactions among multiplicities (e.g., multi-locations, multi-cultures, multi-groups, and multi-interests) embedded in the project elements, we identify the dynamic risks threatening the success of a global IT project. Adopting the principles of service-oriented architecture (SOA), we further propose a set of agile management strategies for mitigating the dynamic risks. The mitigation strategies are conceptually validated. The proposed framework will help practitioners understand the potential risks in their global IT projects and resolve their complex situations when certain types of dynamic risks arise.", "title": "" }, { "docid": "100da900b23fbf4a9645907d89d730af", "text": "This paper describes the design and manufacturing of soft artificial skin with an array of embedded soft strain sensors for detecting various hand gestures by measuring joint motions of five fingers. The proposed skin was made of a hyperelastic elastomer material with embedded microchannels filled with two different liquid conductors, an ionic liquid and a liquid metal. The ionic liquid microchannels were used to detect the mechanical strain changes of the sensing material, and the liquid metal microchannels were used as flexible and stretchable electrical wires for connecting the sensors to an external control circuit. The two heterogeneous liquid conductors were electrically interfaced through flexible conductive threads to prevent the two liquid from being intermixed. The skin device was connected to a computer through a microcontroller instrumentation circuit for reconstructing the 3-D hand motions graphically. The paper also presents preliminary calibration and experimental results.", "title": "" }, { "docid": "cb319d8543639c6bbf9c8c0ab79be640", "text": "In this paper we proposed reinforcement learning algorithms with the generalized reward function. 
In our proposed method we use Q-learning and SARSA algorithms with generalised reward function to train the reinforcement learning agent. We evaluated the performance of our proposed algorithms on two real-time strategy games called BattleCity and S3. There are two main advantages of having such an approach as compared to other works in RTS. (1) We can ignore the concept of a simulator which is often game specific and is usually hard coded in any type of RTS games (2) our system can learn from interaction with any opponents and quickly change the strategy according to the opponents and do not need any human traces as used in previous works.", "title": "" }, { "docid": "4a81bfdcd2c3d543d2cb182fef28da6c", "text": "A novel printed compact wide-band planar antenna for mobile handsets is proposed and analyzed in this paper. The radiating patch of the proposed antenna is designed jointly with the shape of the ground plane. A prototype of the proposed antenna with 30 mm in height and 50 mm in width has been fabricated and tested. Its operating bandwidth with voltage standing wave ratio (VSWR) lower than 3:1 is 870-2450 MHz, which covers the global system for mobile communication (GSM, 890-960 MHz), the global positioning system (GPS, 1575.42 MHz), digital communication system (DCS, 1710-1880 MHz), personal communication system (PCS, 1850-1990 MHz), universal mobile telecommunication system (UMTS, 1920-2170 MHz), and wireless local area network (WLAN, 2400-2484 MHz) bands. Therefore, it could be applicable for the existing and future mobile communication systems. Design details and experimental results are also presented and discussed.", "title": "" }, { "docid": "2ef92113a901df268261be56f5110cfa", "text": "This paper studies the problem of finding a priori shortest paths to guarantee a given likelihood of arriving on-time in a stochastic network. Such ‘‘reliable” paths help travelers better plan their trips to prepare for the risk of running late in the face of stochastic travel times. Optimal solutions to the problem can be obtained from local-reliable paths, which are a set of non-dominated paths under first-order stochastic dominance. We show that Bellman’s principle of optimality can be applied to construct local-reliable paths. Acyclicity of local-reliable paths is established and used for proving finite convergence of solution procedures. The connection between the a priori path problem and the corresponding adaptive routing problem is also revealed. A label-correcting algorithm is proposed and its complexity is analyzed. A pseudo-polynomial approximation is proposed based on extreme-dominance. An extension that allows travel time distribution functions to vary over time is also discussed. We show that the time-dependent problem is decomposable with respect to arrival times and therefore can be solved as easily as its static counterpart. Numerical results are provided using typical transportation networks. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "678a4872dfe753bac26bff2b29ac26b0", "text": "Cyber-physical systems (CPS), such as automotive systems, are starting to include sophisticated machine learning (ML) components. Their correctness, therefore, depends on properties of the inner ML modules. While learning algorithms aim to generalize from examples, they are only as good as the examples provided, and recent efforts have shown that they can produce inconsistent output under small adversarial perturbations. 
This raises the question: can the output from learning components lead to a failure of the entire CPS? In this work, we address this question by formulating it as a problem of falsifying signal temporal logic (STL) specifications for CPS with ML components. We propose a compositional falsification framework where a temporal logic falsifier and a machine learning analyzer cooperate with the aim of finding falsifying executions of the considered model. The efficacy of the proposed technique is shown on an automatic emergency braking system model with a perception component based on deep neural networks.", "title": "" }, { "docid": "78582e3594deb53149422cc41387e330", "text": "Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity, which appears to have potential application to a wide variety of relatively short (greater than 100 points) and noisy time-series data. The development of ApEn was motivated by data length constraints commonly encountered, e.g., in heart rate, EEG, and endocrine hormone secretion data sets. We describe ApEn implementation and interpretation, indicating its utility to distinguish correlated stochastic processes, and composite deterministic/stochastic models. We discuss the key technical idea that motivates ApEn, that one need not fully reconstruct an attractor to discriminate in a statistically valid manner-marginal probability distributions often suffice for this purpose. Finally, we discuss why algorithms to compute, e.g., correlation dimension and the Kolmogorov-Sinai (KS) entropy, often work well for true dynamical systems, yet sometimes operationally confound for general models, with the aid of visual representations of reconstructed dynamics for two contrasting processes. (c) 1995 American Institute of Physics.", "title": "" } ]
scidocsrr
ddc8f3db8fc8ec6add29047c7f624ad2
Language identification with suprasegmental cues: a study based on speech resynthesis.
[ { "docid": "d3b248232b7a01bba1d165908f55a316", "text": "Two views of bilingualism are presented--the monolingual or fractional view which holds that the bilingual is (or should be) two monolinguals in one person, and the bilingual or wholistic view which states that the coexistence of two languages in the bilingual has produced a unique and specific speaker-hearer. These views affect how we compare monolinguals and bilinguals, study language learning and language forgetting, and examine the speech modes--monolingual and bilingual--that characterize the bilingual's everyday interactions. The implications of the wholistic view on the neurolinguistics of bilingualism, and in particular bilingual aphasia, are discussed.", "title": "" } ]
[ { "docid": "6f064ed38a1c3f1550dcc698aa78c765", "text": "The purpose of this paper is to investigate Iranian consumers' perception of German made Mercedes Benz and Japanese made Lexus luxury automobiles brands, that include five perceived values: conspicuousness, uniqueness, social, hedonic and quality on purchase intention. Also, the effects of demographic factors, like identifying the luxury brand preference on purchase intention of luxury automobiles in Iranian consumers, while comparing between them and inferring management implications from implementing data analysis. Based on a thorough review of the literature, 16 hypotheses are derived and tested using Structural Equation Modeling (SEM) and Confirmatory Factor Analysis (CFA), measures ANOVA also was used to analyze the relationship between demographic and Iranian consumers luxury brand perception. The final sample consists of a total of 390 participants. The main findings show that variables of hedonic, uniqueness and quality value were significantly higher than conspicuous and social values. They have more of a role in forming of luxury brand perception in Iranian Consumers. This study is useful for marketers to understand their target market and how their customers evaluate products and make buying decisions. The five perceived values of luxury automobiles can be used as guidelines for salesmen to sell successfully to customers.", "title": "" }, { "docid": "8e7fa0b19ddd1e5d4696c4c0389d57c3", "text": "Research in emotion analysis of text suggest that emotion lexicon based features are superior to corpus based n-gram features. However the static nature of the general purpose emotion lexicons make them less suited to social media analysis, where the need to adopt to changes in vocabulary usage and context is crucial. In this paper we propose a set of methods to extract a word-emotion lexicon automatically from an emotion labelled corpus of tweets. Our results confirm that the features derived from these lexicons outperform the standard Bag-of-words features when applied to an emotion classification task. Furthermore, a comparative analysis with both manually crafted lexicons and a state-of-the-art lexicon generated using Point-Wise Mutual Information, show that the lexicons generated from the proposed methods lead to significantly better classification performance.", "title": "" }, { "docid": "301dfbe40f35d258e457d4f234554031", "text": "in a seismic change in the touch-screen business. Projected capacitive (pro-cap), the touch technology used in the iPhone touch screen, has become the first choice for many small-to-medium (<10-in.) touch-equipped products now in development. The technology is not just Apple-trendy but incorporates some of the best characteristics of competing touch technologies. The three most important advantages of pro-cap technology are as follows:", "title": "" }, { "docid": "904454a191da497071ee9b835561c6e6", "text": "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stopwaves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained.", "title": "" }, { "docid": "c155ce2743c59f4ce49fdffe74d94443", "text": "The theta oscillation (5-10Hz) is a prominent behavior-specific brain rhythm. 
This review summarizes studies showing the multifaceted role of theta rhythm in cognitive functions, including spatial coding, time coding and memory, exploratory locomotion and anxiety-related behaviors. We describe how activity of hippocampal theta rhythm generators - medial septum, nucleus incertus and entorhinal cortex, links theta with specific behaviors. We review evidence for functions of the theta-rhythmic signaling to subcortical targets, including lateral septum. Further, we describe functional associations of theta oscillation properties - phase, frequency and amplitude - with memory, locomotion and anxiety, and outline how manipulations of these features, using optogenetics or pharmacology, affect associative and innate behaviors. We discuss work linking cognition to the slope of the theta frequency to running speed regression, and emotion-sensitivity (anxiolysis) to its y-intercept. Finally, we describe parallel emergence of theta oscillations, theta-mediated neuronal activity and behaviors during development. This review highlights a complex interplay of neuronal circuits and synchronization features, which enables an adaptive regulation of multiple behaviors by theta-rhythmic signaling.", "title": "" }, { "docid": "19e070089a8495a437e81da50f3eb21c", "text": "Mobile payment refers to the use of mobile devices to conduct payment transactions. Users can use mobile devices for remote and proximity payments; moreover, they can purchase digital contents and physical goods and services. It offers an alternative payment method for consumers. However, there are relative low adoption rates in this payment method. This research aims to identify and explore key factors that affect the decision of whether to use mobile payments. Two well-established theories, the Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT), are applied to investigate user acceptance of mobile payments. Survey data from mobile payments users will be used to test the proposed hypothesis and the model.", "title": "" }, { "docid": "2caea7f13980ea4a48fb8e8bb71842f1", "text": "Internet of Things, commonly known as IoT is a promising area in technology that is growing day by day. It is a concept whereby devices connect with each other or to living things. Internet of Things has shown its great benefits in today’s life. Agriculture is one amongst the sectors which contributes a lot to the economy of Mauritius and to get quality products, proper irrigation has to be performed. Hence proper water management is a must because Mauritius is a tropical island that has gone through water crisis since the past few years. With the concept of Internet of Things and the power of the cloud, it is possible to use low cost devices to monitor and be informed about the status of an agricultural area in real time. Thus, this paper provides the design and implementation of a Smart Irrigation and Monitoring System which makes use of Microsoft Azure machine learning to process data received from sensors in the farm and weather forecasting data to better inform the farmers on the appropriate moment to start irrigation. The Smart Irrigation and Monitoring System is made up of sensors which collect data such as air humidity, air temperature, and most importantly soil moisture data. These data are used to monitor the air quality and water content of the soil. 
The raw data are transmitted to the", "title": "" }, { "docid": "242089e8f694a83cb4432862f2d6b1fc", "text": "We present an interpretable framework for path prediction that leverages dependencies between agents’ behaviors and their spatial navigation environment. We exploit two sources of information: the past motion trajectory of the agent of interest and a wide top-view image of the navigation scene. We propose a Clairvoyant Attentive Recurrent Network (CAR-Net) that learns where to look in a large image of the scene when solving the path prediction task. Our method can attend to any area, or combination of areas, within the raw image (e.g., road intersections) when predicting the trajectory of the agent. This allows us to visualize fine-grained semantic elements of navigation scenes that influence the prediction of trajectories. To study the impact of space on agents’ trajectories, we build a new dataset made of top-view images of hundreds of scenes (Formula One racing tracks) where agents’ behaviors are heavily influenced by known areas in the images (e.g., upcoming turns). CAR-Net successfully attends to these salient regions. Additionally, CAR-Net reaches state-of-the-art accuracy on the standard trajectory forecasting benchmark, Stanford Drone Dataset (SDD). Finally, we show CAR-Net’s ability to generalize to unseen scenes.", "title": "" }, { "docid": "71b132b4c056e71c7f3ec5a600d1368e", "text": "In this paper we have shown emotion recognition through EEG processing. Based on the literature work, we have made the best selection of the techniques available so far. This paper is easy to perceive, understand, and interpret the proposed method and the techniques like signal preprocessing, feature extraction and, signal classification in BCI. Our aim in this paper is to come up with an efficient and reliable emotion recognition system. Keywords— Computer-Interface, Signal Processing, Feature Extraction, EEG signals.", "title": "" }, { "docid": "88f008c70ce9cf5a468b08f25f62bdf9", "text": "Cable-suspended robots are structurally similar to parallel actuated robots but with the fundamental difference that cables can only pull the end-effector but not push it. From a scientific point of view, this feature makes feedback control of cable-suspended robots lot more challenging than their counterpart parallel actuated robots. In this brief, we look into the control design for a nonredundant cable-suspended robot under positive input constraints. The design is based on feedback linearization controllers augmented with a reference governor (RG). This RG operates in accordance with the receding horizon strategy, by generating admissible reference signals, that do not violate the input constraints. An important issue in implementing such an algorithm for nonlinear systems is to predict the system behavior in a computationally efficient way. We show that feedback linearization controllers with the RG can offer an efficient way to predict the system’s future states, using the error dynamics of inner feedback loop. Finally, the effectiveness of the proposed method is illustrated by numerical simulation and laboratory experiments on a 6-degree-of-freedom cable suspended robot.", "title": "" }, { "docid": "9e2caec1b8a56c4d0295e4d5d6fbcee9", "text": "Big data defines huge, diverse and fast growing data which requires new technologies to handle. With the rapid growth of data, big data has brought attention of researchers to use it in most prominent way for decision making in various emerging applications. 
These huge data is extremely useful and valuable for scientific exploration, increase productivity in business and improvement in mankind. It helps from public sector to business activities, healthcare to better navigation, smart cities to national security. Though, with large opportunities to work, the challenges are to handle these data is also increased. In this paper basic of big data with its application and challenges have been discussed. These challenges are also inherent from verity, volume and velocity of data. However if we can manage this issues related to big data then there will be potential improvement in quality of our lives.", "title": "" }, { "docid": "fd16f89880144a8f23c74527980de4d0", "text": "In this paper, a multilevel inverter system for an open-end winding induction motor drive is described. Multilevel inversion is achieved by feeding an open-end winding induction motor with two two-level inverters in cascade (equivalent to a three-level inverter) from one end and a single two-level inverter from the other end of the motor. The combined inverter system with open-end winding induction motor produces voltage space-vector locations identical to a six-level inverter. A total of 512 space-vector combinations are available in the proposed scheme, distributed over 91 space-vector locations. The proposed inverter drive scheme is capable of producing a multilevel pulsewidth-modulation (PWM) waveform for the phase voltage ranging from a two-level waveform to a six-level waveform depending on the modulation range. A space-vector PWM scheme for the proposed drive is implemented using a 1.5-kW induction motor with open-end winding structure.", "title": "" }, { "docid": "e2630765e2fa4b203a4250cb5b69b9e9", "text": "Thirteen years have passed since Karl Sims published his work onevolving virtual creatures. Since then,several novel approaches toneural network evolution and genetic algorithms have been proposed.The aim of our work is to apply recent results in these areas tothe virtual creatures proposed by Karl Sims, leading to creaturescapable of solving more complex tasks. This paper presents oursuccess in reaching the first milestone -a new and completeimplementation of the original virtual creatures. All morphologicaland control properties of the original creatures were implemented.Laws of physics are simulated using ODE library. Distributedcomputation is used for CPU-intensive tasks, such as fitnessevaluation.Experiments have shown that our system is capable ofevolving both morphology and control of the creatures resulting ina variety of non-trivial swimming and walking strategies.", "title": "" }, { "docid": "7c9be363cf760d03aab0b6bffd764676", "text": "Many children and youth in rural communities spend significant portions of their lives on school buses. This paper reviews the limited empirical research on the school bus experience, presents some new exploratory data, and offers some suggestions for future research on the impact of riding the school bus on children and youth.", "title": "" }, { "docid": "bd5d84f20dcf130ea8b8d621befcb0dd", "text": "The output of convolutional neural networks (CNNs) has been shown to be discontinuous which can make the CNN image classifier vulnerable to small well-tuned artificial perturbation. That is, images modified by conducting such alteration (i.e., adversarial perturbation) that make little difference to the human eyes can completely change the CNN classification results. 
In this paper, we propose a practical attack using differential evolution (DE) for generating effective adversarial perturbations. We comprehensively evaluate the effectiveness of different types of DEs for conducting the attack on different network structures. The proposed method only modifies five pixels (i.e., few-pixel attack), and it is a black-box attack which only requires the miracle feedback of the target CNN systems. The results show that under strict constraints which simultaneously control the number of pixels changed and overall perturbation strength, attacking can achieve 72.29%, 72.30%, and 61.28% non-targeted attack success rates, with 88.68%, 83.63%, and 73.07% confidence on average, on three common types of CNNs. The attack only requires modifying five pixels with 20.44, 14.28, and 22.98 pixel value distortion. Thus, we show that current deep neural networks are also vulnerable to such simpler black-box attacks even under very limited attack conditions.", "title": "" }, { "docid": "83696ddab94d293e0d28172c200709e0", "text": "Traffic sign detection plays an important role in driving assistance systems and traffic safety. But the existing detection methods are usually limited to a predefined set of traffic signs. Therefore we propose a traffic sign detection algorithm based on deep Convolutional Neural Network (CNN) using Region Proposal Network(RPN) to detect all Chinese traffic sign. Firstly, a Chinese traffic sign dataset is obtained by collecting seven main categories of traffic signs and their subclasses. Then a traffic sign detection CNN model is trained and evaluated by fine-tuning technology using the collected dataset. Finally, the model is tested by 33 video sequences with the size of 640×480. The result shows that the proposed method has towards real-time detection speed and above 99% detection precision. The trained model can be used to capture the traffic sign from videos by on-board camera or driving recorder and construct a complete traffic sign dataset.", "title": "" }, { "docid": "b24fa0e9c208bf8ea0ea5f3fe0453884", "text": "Bacteria and fungi are ubiquitous in the atmosphere. The diversity and abundance of airborne microbes may be strongly influenced by atmospheric conditions or even influence atmospheric conditions themselves by acting as ice nucleators. However, few comprehensive studies have described the diversity and dynamics of airborne bacteria and fungi based on culture-independent techniques. We document atmospheric microbial abundance, community composition, and ice nucleation at a high-elevation site in northwestern Colorado. We used a standard small-subunit rRNA gene Sanger sequencing approach for total microbial community analysis and a bacteria-specific 16S rRNA bar-coded pyrosequencing approach (4,864 sequences total). During the 2-week collection period, total microbial abundances were relatively constant, ranging from 9.6 x 10(5) to 6.6 x 10(6) cells m(-3) of air, and the diversity and composition of the airborne microbial communities were also relatively static. Bacteria and fungi were nearly equivalent, and members of the proteobacterial groups Burkholderiales and Moraxellaceae (particularly the genus Psychrobacter) were dominant. These taxa were not always the most abundant in freshly fallen snow samples collected at this site. Although there was minimal variability in microbial abundances and composition within the atmosphere, the number of biological ice nuclei increased significantly during periods of high relative humidity. 
However, these changes in ice nuclei numbers were not associated with changes in the relative abundances of the most commonly studied ice-nucleating bacteria.", "title": "" }, { "docid": "0ab2034bf15846577b94acb7df674137", "text": "Purpose-French universities can play a key role in generating Smart City approach through an innovative Public-Private Partnership dedicated to urban transformation. Methodology-We led an action-research study for five years with several research and pedagogic projects including users or citizens. Findings-The paper points out main factors of Smart City development. It also presents shared demonstrators’ characteristics including industrial scale, sustainability and citizens’ participation. Practical implications-University of Lorraine diversification strategy through the “Chaire REVES” supported by public and private partners. Social implications-At regional level, industrial-university-territorial partnerships could tackle both societal and economical issues “with”, “for”, and “by” citizens. Originality/value-Based on the Living Lab concept our case study shows a concrete regional university strategy involving: user-centric design, collaborative processes, citizens’ workshops and new financial and organizational answers enabling collaboration between private companies and public institutions. Our paper also argues that innovative public and private partnership involving users are necessary for developing smart cities. KeywordsSmart City, Living Lab, Public-Private Partnership, Collaborative innovation, sustainable urban transformation, shared demonstrator, diversification strategy Paper typeCase study", "title": "" }, { "docid": "2e0e54bd8d8bbaac19f861c951e80033", "text": "Self-service systems, online help systems, web services, mobile communication devices, remote control systems, and dashboard computers are providing ever more functionality. However, along with greater functionality, the user must also come to terms with the greater complexity and a steeper learning curve. This complexity is compounded by the sheer proliferation of different systems lacking a standard user interface. Conversational user interfaces allow various natural communication modes like speech, gestures and facial expressions for input as well as output and exploit the context in which an input is used to compute its meaning. The growing emphasis on conversational user interfaces is fundamentally inspired by the aim to support natural, flexible, efficient and powerfully expressive means of human-computer communication that are easy to learn and use. Advances in human language technology and intelligent user interfaces offer the promise of pervasive access to online information and web services. The development of conversational user interfaces allows the average person to interact with computers anytime and anywhere without special skills or training, using such common devices as a mobile phone. Advanced conversational user interfaces include the situated understanding of possibly imprecise, ambiguous or incomplete multimodal input and the generation of coordinated, cohesive, and coherent multimodal presentations. In conversational user interfaces the dialogue management is based on representing, reasoning, and exploiting models of the user, domain, task, context, and modalities. These systems are capable of real-time dialogue processing, including flexible multimodal turn-taking, backchanneling, and metacommunicative interaction. 
One important aspect of conversations is that the successive utterances of which it consists are often interconnected by cross references of various sorts. For instance, one utterance will use a pronoun to refer to something mentioned in the previous utterance. Computational models of discourse must be able to represent, compute and resolve such cross references. Conversational user interfaces differ in the degree with which the user or the system controls the conversation. In directed or menubased dialogues the system maintains tight control and the human is highly restricted in his dialogue behavior, whereas in free-form dialogue the human takes complete control and the system is totally passive. In mixed-initiative conversational user interfaces, the dialogue control moves back and forth between the system and the user like in most face-to-face conversations between humans. Four papers in this special issue deal with conversational user interfaces that use speech as the main mode of interaction. The paper by Helbig and Schindler discusses state-of-art component technologies and requirements for the successful deployment of conversational user interfaces in industrial environments such as logistics centers, assembly lines, and car inspection facilities. It shows that the speech recognition rate in such environments is still depending on the correct positioning and adjustment of the microphone and discusses the need for wireless microphones in most industrial applications of spoken dialogue systems. Block, Caspari and Schachtl describe an innovative dialogue engine for the Virtual Call Center Agent (ViCA), that provides access to product documentation. A multiframe based dialogue engine is introduced that supports natural conversations by allowing over-answering and free-order information input. The paper reports encouraging results from a usability test showing a high task completion rate. The paper by te Vrugt and Portele describes a tasked-oriented spoken dialogue system that allows the user to control a wide spectrum of infotainment applications, like a hard-disk recorder, an image browser, a music player, a TV set and an electronic program guide. The paper presents a flexible framework for such a multi-application dialogue system and an applicationindependent scheme for dialogue processing. Nöth et al. describe lessons learnt from the implementation of three commercially deployed conversational interfaces. The authors propose five guidelines, which they consider to be crucial, when building and operating telephone-based dialogue systems. One of the guidelines concerns the fact that a spoken dialogue system must react fast to any kind of user input, no matter", "title": "" }, { "docid": "d1940745dcc684006037ad099697c4a4", "text": "On a day in November, the body of a 31-year-old man was found near a swimming lake with two open and partly emptied fish tins lying next to him. Further investigations showed that the man had been allergic to fish protein and suffered from severe depression and drug psychosis. Already some days before the suicide, he had repeatedly asked for fish to kill himself. Although the results of the chemical and toxicological examinations were negative, the autopsy findings and histological tests suggest that death was caused by an anaphylactic reaction.", "title": "" } ]
scidocsrr
49db1665af3fd59eb2ae659d526d6842
Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks
[ { "docid": "c92e7bf3b01e8beaf4d24ec2f6ae805e", "text": "In this work, we introduce a dataset of video annotated with high quality natural language phrases describing the visual content in a given segment of time. Our dataset is based on the Descriptive Video Service (DVS) that is now encoded on many digital media products such as DVDs. DVS is an audio narration describing the visual elements and actions in a movie for the visually impaired. It is temporally aligned with the movie and mixed with the original movie soundtrack. We describe an automatic DVS segmentation and alignment method for movies, that enables us to scale up the collection of a DVS-derived dataset with minimal human intervention. Using this method, we have collected the largest DVS-derived dataset for video description of which we are aware. Our dataset currently includes over 84.6 hours of paired video/sentences from 92 DVDs and is growing.", "title": "" }, { "docid": "e04dda55d05d15e6a2fb3680a603bd43", "text": "Multilayer perceptrons (MLPs) or neural networks are popular models used for nonlinear regression and classification tasks. As regressors, MLPs model the conditional distribution of the predictor variables Y given the input variables X . However, this predictive distribution is assumed to be unimodal (e.g. Gaussian). For tasks involving structured prediction, the conditional distribution should be multi-modal, resulting in one-to-many mappings. By using stochastic hidden variables rather than deterministic ones, Sigmoid Belief Nets (SBNs) can induce a rich multimodal distribution in the output space. However, previously proposed learning algorithms for SBNs are not efficient and unsuitable for modeling real-valued data. In this paper, we propose a stochastic feedforward network with hidden layers composed of both deterministic and stochastic variables. A new Generalized EM training procedure using importance sampling allows us to efficiently learn complicated conditional distributions. Our model achieves superior performance on synthetic and facial expressions datasets compared to conditional Restricted Boltzmann Machines and Mixture Density Networks. In addition, the latent features of our model improves classification and can learn to generate colorful textures of objects.", "title": "" } ]
[ { "docid": "970626f1586f8053ea2d7d9a3a0c723d", "text": "The aim of this paper is to provide an efficient frequency-domain method for bifurcation analysis of nonlinear dynamical systems. The proposed method consists in directly tracking the bifurcation points when a system parameter such as the excitation or nonlinearity level is varied. To this end, a so-called extended system comprising the equation of motion and an additional equation characterizing the bifurcation of interest is solved by means of the Harmonic Balance Method coupled with an arc-length continuation technique. In particular, an original extended system for the detection and tracking of Neimark-Sacker (secondary Hopf) bifurcations is introduced. By applying the methodology to a nonlinear energy sink and to a rotor-stator rubbing system, it is shown that the bifurcation tracking can be used to efficiently compute the boundaries of stability and/or dynamical regimes, i.e., safe operating zones.", "title": "" }, { "docid": "ae3ccd3698a5b96243a223d41ee4ece4", "text": "In this paper, we introduce a new approach to image retrieval. This new approach takes the best from two worlds, combines image features (content) and words from collateral text (context) into one semantic space. Our approach uses Latent Semantic Indexing, a method that uses co-occurrence statistics to uncover hidden semantics. This paper shows how this method, that has proven successful in both monolingual and cross lingual text retrieval, can be used for multi-modal and cross-modal information retrieval. Experiments with an on-line newspaper archive show that Latent Semantic Indexing can outperform both content based and context based approaches and that it is a promising approach for indexing visual and multi-modal data.", "title": "" }, { "docid": "5e63c7f6d86b634d8a2b7e0746eaa0d2", "text": "A famous theorem of Szemerédi asserts that given any density 0 < δ ≤ 1 and any integer k ≥ 3, any set of integers with density δ will contain infinitely many proper arithmetic progressions of length k. For general k there are essentially four known proofs of this fact; Szemerédi’s original combinatorial proof using the Szemerédi regularity lemma and van der Waerden’s theorem, Furstenberg’s proof using ergodic theory, Gowers’ proof using Fourier analysis and the inverse theory of additive combinatorics, and the more recent proofs of Gowers and Rödl-Skokan using a hypergraph regularity lemma. Of these four, the ergodic theory proof is arguably the shortest, but also the least elementary, requiring passage (via the Furstenberg correspondence principle) to an infinitary measure preserving system, and then decomposing a general ergodic system relative to a tower of compact extensions. Here we present a quantitative, self-contained version of this ergodic theory proof, and which is “elementary” in the sense that it does not require the axiom of choice, the use of infinite sets or measures, or the use of the Fourier transform or inverse theorems from additive combinatorics. It also gives explicit (but extremely poor) quantitative bounds.", "title": "" }, { "docid": "3304f4d4c936a416b0ced56ee8e96f20", "text": "Big Data analytics plays a key role through reducing the data size and complexity in Big Data applications. Visualization is an important approach to helping Big Data get a complete view of data and discover data values. Big Data analytics and visualization should be integrated seamlessly so that they work best in Big Data applications. 
Conventional data visualization methods as well as the extension of some conventional methods to Big Data applications are introduced in this paper. The challenges of Big Data visualization are discussed. New methods, applications, and technology progress of Big Data visualization are presented.", "title": "" }, { "docid": "0d2260653f223db82e2e713f211a2ba0", "text": "Smartphone usage is a hot topic in pervasive computing due to their popularity and personal aspect. We present our initial results from analyzing how individual differences, such as gender and age, affect smartphone usage. The dataset comes from a large scale longitudinal study, the Menthal project. We select a sample of 30, 677 participants, from which 16, 147 are males and 14, 523 are females, with a median age of 21 years. These have been tracked for at least 28 days and they have submitted their demographic data through a questionnaire. The ongoing experiment has been started in January 2014 and we have used our own mobile data collection and analysis framework. Females use smartphones for longer periods than males, with a daily mean of 166.78 minutes vs. 154.26 minutes. Younger participants use their phones longer and usage is directed towards entertainment and social interactions through specialized apps. Older participants use it less and mainly for getting information or using it as a classic phone.", "title": "" }, { "docid": "6087ad77caa9947591eb9a3f8b9b342d", "text": "Geobacter sulfurreducens is a well-studied representative of the Geobacteraceae, which play a critical role in organic matter oxidation coupled to Fe(III) reduction, bioremediation of groundwater contaminated with organics or metals, and electricity production from waste organic matter. In order to investigate G. sulfurreducens central metabolism and electron transport, a metabolic model which integrated genome-based predictions with available genetic and physiological data was developed via the constraint-based modeling approach. Evaluation of the rates of proton production and consumption in the extracellular and cytoplasmic compartments revealed that energy conservation with extracellular electron acceptors, such as Fe(III), was limited relative to that associated with intracellular acceptors. This limitation was attributed to lack of cytoplasmic proton consumption during reduction of extracellular electron acceptors. Model-based analysis of the metabolic cost of producing an extracellular electron shuttle to promote electron transfer to insoluble Fe(III) oxides demonstrated why Geobacter species, which do not produce shuttles, have an energetic advantage over shuttle-producing Fe(III) reducers in subsurface environments. In silico analysis also revealed that the metabolic network of G. sulfurreducens could synthesize amino acids more efficiently than that of Escherichia coli due to the presence of a pyruvate-ferredoxin oxidoreductase, which catalyzes synthesis of pyruvate from acetate and carbon dioxide in a single step. In silico phenotypic analysis of deletion mutants demonstrated the capability of the model to explore the flexibility of G. sulfurreducens central metabolism and correctly predict mutant phenotypes. 
These results demonstrate that iterative modeling coupled with experimentation can accelerate the understanding of the physiology of poorly studied but environmentally relevant organisms and may help optimize their practical applications.", "title": "" }, { "docid": "d817f4d8ba1eb4c4318008edcd3e1f8b", "text": "This paper presents the development of a perception system for indoor environments to allow autonomous navigation for surveillance mobile robots. The system is composed by two parts. The first part is a reactive navigation system in which a mobile robot moves avoiding obstacles in environment, using the distance sensor Kinect. The second part of this system uses a artificial neural network (ANN) to recognize different configurations of the environment, for example, path ahead, left path, right path and intersections. The ANN is trained using data captured by the Kinect sensor in indoor environments. This way, the robot becomes able to perform a topological navigation combining internal reactive behavior to avoid obstacles and the ANN to locate the robot in the environment, in a deliberative behavior. The topological map is represented by a graph which represents the configuration of the environment, where the hallways (path ahead) are the edges and locations (left path and intersection, for example) are the vertices. The system also works in the dark, which is a great advantage for surveillance systems. The experiments were performed with a Pioneer P3-AT robot equipped with a Kinect sensor in order to validate and evaluate this approach. The proposed method demonstrated to be a promising approach to autonomous mobile robots navigation.", "title": "" }, { "docid": "fb83fca1b1ed1fca15542900bdb3748d", "text": "Learning disease severity scores automatically from collected measurements may aid in the quality of both healthcare and scientific understanding. Some steps in that direction have been taken and machine learning algorithms for extracting scoring functions from data have been proposed. Given the rapid increase in both quantity and diversity of data measured and stored, the large amount of information is becoming one of the challenges for learning algorithms. In this work, we investigated the direction of the problemwhere the dimensionality of measured variables is large. Learning the severity score in such cases brings the issue of which of measured features are relevant. 
We have proposed a novel approach by combining desirable properties of existing formulations, which compares favorably to alternatives in accuracy and especially in the robustness of the learned scoring function. The proposed formulation has a nonsmooth penalty that induces sparsity. This problem is solved by addressing a dual formulation which is smooth and allows an efficient optimization. The proposed approach might be used as an effective and reliable tool for both scoring function learning and biomarker discovery, as demonstrated by identifying a stable set of genes related to influenza symptoms’ severity, which are enriched in immune-related processes.", "title": "" }, { "docid": "0a3eaf68a3f1f2587f2456cbb29e1f06", "text": "OBJECTIVE\nTo develop a single trial motor imagery (MI) classification strategy for the brain-computer interface (BCI) applications by using time-frequency synthesis approach to accommodate the individual difference, and using the spatial patterns derived from electroencephalogram (EEG) rhythmic components as the feature description.\n\n\nMETHODS\nThe EEGs are decomposed into a series of frequency bands, and the instantaneous power is represented by the envelope of oscillatory activity, which forms the spatial patterns for a given electrode montage at a time-frequency grid. Time-frequency weights determined by training process are used to synthesize the contributions from the time-frequency domains.\n\n\nRESULTS\nThe present method was tested in nine human subjects performing left or right hand movement imagery tasks. The overall classification accuracies for nine human subjects were about 80% in the 10-fold cross-validation, without rejecting any trials from the dataset. The loci of MI activity were shown in the spatial topography of differential-mode patterns over the sensorimotor area.\n\n\nCONCLUSIONS\nThe present method does not contain a priori subject-dependent parameters, and is computationally efficient. The testing results are promising considering the fact that no trials are excluded due to noise or artifact.\n\n\nSIGNIFICANCE\nThe present method promises to provide a useful alternative as a general purpose classification procedure for MI classification.", "title": "" }, { "docid": "135513fa93b5fade93db11fdf942fe7a", "text": "This paper describes two techniques that improve throughput in an ad hoc network in the presence of nodes that agree to forward packets but fail to do so. To mitigate this problem, we propose categorizing nodes based upon their dynamically measured behavior. We use a watchdog that identifies misbehaving nodes and a pathrater that helps routing protocols avoid these nodes. Through simulation we evaluate watchdog and pathrater using packet throughput, percentage of overhead (routing) transmissions, and the accuracy of misbehaving node detection. When used together in a network with moderate mobility, the two techniques increase throughput by 17% in the presence of 40% misbehaving nodes, while increasing the percentage of overhead transmissions from the standard routing protocol's 9% to 17%. During extreme mobility, watchdog and pathrater can increase network throughput by 27%, while increasing the overhead transmissions from the standard routing protocol's 12% to 24%.", "title": "" }, { "docid": "4034dbc9f0d4bd3ea10a2c78199acda9", "text": "One of the most important issues for organizations and information technology professionals is the success of information technology (IT) projects.
This study reviews a survey of financial executives and examines their views on aspects of project management and project success. First, it was found that overall systems development projects are viewed as being successful by organizations. Next, a series of analyses were performed to assess several variables’ impact on IT project success. Skilled project measurement was found to result in higher IT project success. Restrictions on IT application development were found to correlate to lower IT project success. The most important project consideration did not affect project success. Finally, a significant positive relationship was found between the IT project success and overall IT returns. The implications, limitations, and conclusions of these findings are discussed. The study can be used as a basis for further exploration on project management success, influencing variables, and motivators. The findings can also be used to guide management teams in project management decisions to maximize returns to their organizations. The paper studies a large secondary data sample set, which empirically reviews corporations’ experiences with project management. In addition, it explores variables influencing overall project management success perception. Against this backdrop, some researchers have attempted to analyze either reasons for failure or processes that can lead to success. The validity of these dismal returns has been challenged recently, including even the CHAOS report (Eveleens & Verhoef, 2010). As a result, the actual success rates of information technology projects remain elusive.", "title": "" }, { "docid": "63cf9ef326bbe39aa1ecc86b6b1cb0ce", "text": "Drug delivery systems (DDS) have become important tools for the specific delivery of a large number of drug molecules. Since their discovery in the 1960s liposomes were recognized as models to study biological membranes and as versatile DDS of both hydrophilic and lipophilic molecules. Liposomes--nanosized unilamellar phospholipid bilayer vesicles--undoubtedly represent the most extensively studied and advanced drug delivery vehicles. After a long period of research and development efforts, liposome-formulated drugs have now entered the clinics to treat cancer and systemic or local fungal infections, mainly because they are biologically inert and biocompatible and practically do not cause unwanted toxic or antigenic reactions. A novel, up-coming and promising therapy approach for the treatment of solid tumors is the depletion of macrophages, particularly tumor associated macrophages with bisphosphonate-containing liposomes. In the advent of the use of genetic material as therapeutic molecules the development of delivery systems to target such novel drug molecules to cells or to target organs becomes increasingly important. Liposomes, in particular lipid-DNA complexes termed lipoplexes, compete successfully with viral gene transfection systems in this field of application. Future DDS will mostly be based on protein, peptide and DNA therapeutics and their next generation analogs and derivatives.
Due to their versatility and vast body of known properties liposome-based formulations will continue to occupy a leading role among the large selection of emerging DDS.", "title": "" }, { "docid": "4041235ab6ad93290ed90cdf5e07d6e5", "text": "This article describes Apron, a freely available library dedicated to the static analysis of the numerical variables of programs by abstract interpretation. Its goal is threefold: provide analysis implementers with ready-to-use numerical abstractions under a unified API, encourage the research in numerical abstract domains by providing a platform for integration and comparison, and provide teaching and demonstration tools to disseminate knowledge on abstract interpretation.", "title": "" }, { "docid": "78e49f4e38dbafb51269fee46b8ace74", "text": "In this paper, we are concerned with image downsampling using subpixel techniques to achieve superior sharpness for small liquid crystal displays (LCDs). Such a problem exists when a high-resolution image or video is to be displayed on low-resolution display terminals. Limited by the low-resolution display, we have to shrink the image. Signal-processing theory tells us that optimal decimation requires low-pass filtering with a suitable cutoff frequency, followed by downsampling. In doing so, we need to remove many useful image details causing blurring. Subpixel-based downsampling, taking advantage of the fact that each pixel on a color LCD is actually composed of individual red, green, and blue subpixel stripes, can provide apparent higher resolution. In this paper, we use frequency-domain analysis to explain what happens in subpixel-based downsampling and why it is possible to achieve a higher apparent resolution. According to our frequency-domain analysis and observation, the cutoff frequency of the low-pass filter for subpixel-based decimation can be effectively extended beyond the Nyquist frequency using a novel antialiasing filter. Applying the proposed filters to two existing subpixel downsampling schemes called direct subpixel-based downsampling (DSD) and diagonal DSD (DDSD), we obtain two improved schemes, i.e., DSD based on frequency-domain analysis (DSD-FA) and DDSD based on frequency-domain analysis (DDSD-FA). Experimental results verify that the proposed DSD-FA and DDSD-FA can provide superior results, compared with existing subpixel or pixel-based downsampling methods.", "title": "" }, { "docid": "2bb988a1d2b3269e7ebe989a65f44487", "text": "The future connectivity landscape and, notably, the 5G wireless systems will feature Ultra-Reliable Low Latency Communication (URLLC). The coupling of high reliability and low latency requirements in URLLC use cases makes the wireless access design very challenging, in terms of both the protocol design and of the associated transmission techniques. This paper aims to provide a broad perspective on the fundamental tradeoffs in URLLC as well as the principles used in building access protocols. Two specific technologies are considered in the context of URLLC: massive MIMO and multi-connectivity, also termed interface diversity. The paper also touches upon the important question of the proper statistical methodology for designing and assessing extremely high reliability levels.", "title": "" }, { "docid": "ed66f39bda7ccd5c76f64543b5e3abd6", "text": "BACKGROUND\nLoeys-Dietz syndrome is a recently recognized multisystemic disorder caused by mutations in the genes encoding the transforming growth factor-beta receptor. 
It is characterized by aggressive aneurysm formation and vascular tortuosity. We report the musculoskeletal demographic, clinical, and imaging findings of this syndrome to aid in its diagnosis and treatment.\n\n\nMETHODS\nWe retrospectively analyzed the demographic, clinical, and imaging data of sixty-five patients with Loeys-Dietz syndrome seen at one institution from May 2007 through December 2008.\n\n\nRESULTS\nThe patients had a mean age of twenty-one years, and thirty-six of the sixty-five patients were less than eighteen years old. Previous diagnoses for these patients included Marfan syndrome (sixteen patients) and Ehlers-Danlos syndrome (two patients). Spinal and foot abnormalities were the most clinically important skeletal findings. Eleven patients had talipes equinovarus, and nineteen patients had cervical anomalies and instability. Thirty patients had scoliosis (mean Cobb angle [and standard deviation], 30 degrees +/- 18 degrees ). Two patients had spondylolisthesis, and twenty-two of thirty-three who had computed tomography scans had dural ectasia. Thirty-five patients had pectus excavatum, and eight had pectus carinatum. Combined thumb and wrist signs were present in approximately one-fourth of the patients. Acetabular protrusion was present in approximately one-third of the patients and was usually mild. Fourteen patients had previous orthopaedic procedures, including scoliosis surgery, cervical stabilization, clubfoot correction, and hip arthroplasty. Features of Loeys-Dietz syndrome that are important clues to aid in making this diagnosis include bifid broad uvulas, hypertelorism, substantial joint laxity, and translucent skin.\n\n\nCONCLUSIONS\nPatients with Loeys-Dietz syndrome commonly present to the orthopaedic surgeon with cervical malformations, spinal and foot deformities, and findings in the craniofacial and cutaneous systems.\n\n\nLEVEL OF EVIDENCE\nTherapeutic Level IV. See Instructions to Authors for a complete description of levels of evidence.", "title": "" }, { "docid": "1ca801ec3c0f5c0cbda2061ecd3cbfc0", "text": "One objective of the French-funded (ANR-2006-SECU-006) ISyCri Project (ISyCri stands for Interoperability of Systems in Crisis situation) is to provide the crisis cell in charge of the situation management with an information system (IS) able to support the interoperability of partners involved in this collaborative situation. Such a system is called Mediation Information System (MIS). This system must be in charge of (i) information exchange, (ii) services sharing and (iii) behavior orchestration. This paper presents the first step of the MIS engineering, the deduction of a collaborative process used to coordinate actors of the crisis cell. Especially, this paper give a formal definition of the deduction rules used to deduce the collaborative process.", "title": "" }, { "docid": "0ef117ca4663f523d791464dad9a7ebf", "text": "In this paper, a circularly polarized, omnidirectional side-fed bifilar helix antenna, which does not require a ground plane is presented. The antenna has a height of less than 0.1λ and the maximum boresight gain of 1.95dB, with 3dB beamwidth of 93°. The impedance bandwidth of the antenna for VSWR≤2 (with reference to resonant input resistance of 25Ω) is 2.7%. The simulated axial ratio(AR) at the resonant frequency 860MHz is 0.9 ≤AR≤ 1.0 in the whole hemisphere except small region around the nulls. The polarization bandwidth for AR≤3dB is 34.7%. 
The antenna is especially useful for high speed aerodynamic bodies made of composite materials (such as UAVs) where low profile antennas are essential to reduce air resistance and/or proper metallic ground is not available for monopole-type antenna.", "title": "" }, { "docid": "4db8ae1dc1f340a4c7d9fcd90fb576b7", "text": "Implementation of digital modulators on Field Programmable Gate Array (FPGA) is a research area that has received great attention recently. Most of the research has focused on the implementation of simple digital modulators on FPGAs such as Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK), and Phase Shift Keying (PSK). This paper presented a novel method of implementing Quadrature Phase Shift Keying (QPSK) along with Binary PSK (BPSK) using accumulators with a reverse addressing technique. The implementation of the BPSK modulator required two sinusoidal signals with 180-degree phase shift. The first signal was obtained using Look Up Table(LUT) based on Direct Digital Synthesizer (DDS) technique. The second signal was obtained by using the same LUT but after inverting the most significant bit of the accumulator to get the out of phase signal. For the QPSK modulator, four sinusoidal waves were needed. Using only one LUT, these waves were obtained. The first two wave were generated by using two accumulators working on the rising edge and the falling edge of a perfect twice frequency square wave clock which results in a 90-degree phase shift. The other waves were obtained from the same accumulators after reversing the most significant bit in each one. The implementation of the entire systems was done in the Very high speed integrated circuit Hardware Description Language (VHDL) without the help of Xilinx System Generator or DSP Builder tools as many papers did.", "title": "" }, { "docid": "9c89c4c4ae75f9b003fca6696163619a", "text": "We study a class of stochastic optimization models of expected utility in markets with stochastically changing investment opportunities. The prices of the primitive assets are modelled as diffusion processes whose coefficients evolve according to correlated diffusion factors. Under certain assumptions on the individual preferences, we are able to produce reduced form solutions. Employing a power transformation, we express the value function in terms of the solution of a linear parabolic equation, with the power exponent depending only on the coefficients of correlation and risk aversion. This reduction facilitates considerably the study of the value function and the characterization of the optimal hedging demand. The new results demonstrate an interesting connection with valuation techniques using stochastic differential utilities and also, with distorted measures in a dynamic setting.", "title": "" } ]
scidocsrr
8b0e74773d1ae0b88ca443d8f6a6e5d4
Motion Estimation from Image and Inertial Measurements
[ { "docid": "0509fa7a08613b5a2383c53f40882e38", "text": "We present a demonstrated and commercially viable self-tracker, using robust software that fuses data from inertial and vision sensors. Compared to infrastructurebased trackers, self-trackers have the advantage that objects can be tracked over an extremely wide area, without the prohibitive cost of an extensive network of sensors or emitters to track them. So far most AR research has focused on the long-term goal of a purely vision-based tracker that can operate in arbitrary unprepared environments, even outdoors. We instead chose to start with artificial fiducials, in order to quickly develop the first self-tracker which is small enough to wear on a belt, low cost, easy to install and self-calibrate, and low enough latency to achieve AR registration. We also present a roadmap for how we plan to migrate from artificial fiducials to natural ones. By designing to the requirements of AR, our system can easily handle the less challenging applications of wearable VR systems and robot navigation.", "title": "" } ]
[ { "docid": "9ec8a4b8e052b352775b5f6fb98ff914", "text": "For most of the existing commercial driver assistance systems the use of a single environmental sensor and a tracking model tied to the characteristics of this sensor is sufficient. When using a multi-sensor fusion approach with heterogeneous sensors the information available for tracking depends on the sensors detecting the object. This paper describes an approach where multiple models are used for tracking moving objects. The best model for tracking is chosen based on the available sensor information. The architecture of the tracking system along with the tracking models and algorithm for model selection are presented. The design of the architecture and algorithms allows an extension of the system with new sensors and tracking models without changing existing software. The approach was implemented and successfully used in Tartan Racing’s autonomous vehicle for the Urban Grand Challenge. The advantages of the multisensor approach are explained and practical results of a representative scenario are presented.", "title": "" }, { "docid": "6e007d17153997a879467f30fa1d2aed", "text": "Since the proposal of big data analysis and Graphic Processing Unit (GPU), the deep learning technology has received a great deal of attention and has been widely applied in the field of imaging processing. In this paper, we have an aim to completely review and summarize the deep learning technologies for image denoising proposed in recent years. Morever, we systematically analyze the conventional machine learning methods for image denoising. Finally, we point out some research directions for the deep learning technologies in image denoising. abstract environment.", "title": "" }, { "docid": "3542433e5dd080af030b15c655fe4c6d", "text": "Life expectancy at birth has roughly tripled over the course of human history. Early gains were due to a general improvement in living standards and organized efforts to control the spread of infectious disease. Reductions in infant and child mortality in the late 19th and early 20th century led to a rapid increase in life expectancy at birth. Since 1970, the main factor driving continued gains in life expectancy in industrialized countries is a reduction in death rates among the elderly. In particular, death rates due to cardiovascular disease and cancer have declined in recent decades thanks to a variety of factors, including successful medical intervention. Based on available demographic evidence, the human life span shows no sign of approaching a fixed limit imposed by biology or other factors. Rather, both the average and the maximum human life span have increased steadily over time for more than a century. The complexity and historical stability of these changes suggest that the most reliable method of predicting the future is merely to extrapolate past trends. Such methods suggest that life expectancy at birth in industrialized countries will be about 85-87years at the middle of the 21st century.", "title": "" }, { "docid": "d95a204f5e931c9e5a1fff7dbfa3bc8c", "text": "Educational games have long been used in the classroom to add an immersive aspect to the curriculum. While the technology has a cadre of strong advocates, formal reviews have yielded mixed results. Two widely reported problems with educational games are poor production quality and monotonous game-play. 
On the other hand, commercial noneducational games exhibit both high production standards (good artwork, animation, and sound) and diversity of gameplay experience. Recently, educators have started to use commercial games in the classroom to overcome these obstacles. However, the use of these games is often limited since it is usually difficult to adapt them from their entertainment role. We describe how a commercial computer role-playing game (Neverwinter Nights) can be adapted by non-programmers, to produce a more enriching educational game-playing experience. This adaptation can be done by individual educators, groups of educators or by commercial enterprises. In addition, by using our approach, students can further adapt or augment the games they are playing to gain additional and deeper insights into the models and underlying abstractions of the subject domain they are learning about. This approach can be applied across a wide range of topics such as monetary systems in economics, the geography of a region, the culture of a population, or the sociology of a group or of interacting groups. EDUCATIONAL COMPUTER GAMES Educators are aware of the motivational power of simulation-based gaming and have diligently sought ways to exploit that power (Bowman, 1982; Malone & Lepper, 1987; Cordova & Lepper, 1996). Advocates of this approach have been captivated by the potential of creating immersive experiences (Stadsklev, 1974; Greenblat & Duke, 1975; Gee, 2003). The intent was to have students become existential player/participants operating within a virtual world with goals, resources and potential behaviors shaped by both the underlying model and the players’ experience and choices (Colella, Klopfer & Resnick, 2001; Collins & Ferguson, 1993; Rieber, 1996). Contemporary exponents of educational gaming/simulations have drawn their inspiration from modern video games (Gee, 2003). Like earlier proponents, they have been captivated by the ability of well designed gaming simulations to induce the immersive, \"in-the-groove\" experience that Csikszentmihalyi (1991) described as \"flow.\" They contend that the scaffolded learning principles employed in modern video games create the potential for participant experiences that are personally meaningful, socially rich, essentially experiential and highly epistemological (Bos, 2001; Gee, 2003; Halverson, 2003). Furthermore the design principles of successful video games provide a partial glimpse into possible future educational environments that incorporate what is commonly referred to as “just in time /need to know” learning (Prensky, 2001; Gee, 2005). Unfortunately, educational game producers have not had much success at producing the compelling, immersive environments of successful commercial games (Gee, 2003). “Most look like infomercials, showing low quality, poor editing, and low production costs.” (Squire & Jenkins, 2003, p11). Even relatively well received educational games such as Reader Rabbit, The Magic School Bus, Math Blaster, and States and Traits are little more than “electronic flashcards” that simply combine monotonous repetition with visual animations (Card, 1995; Squire & Jenkins, n.d.; Squire & Jenkins, 2003). Approaches to educational gaming/simulation can range from the instructivist in which students learn through playing games (Kafai, 1995) to the experimentalist in which students learn through exploring micro-worlds (Rieber, 1992, 1996) to the constructionist where students learn by building games (Papert & Harel, 1991). 
Advocates of the latter approach have been in the minority but the potential power of the game-building technologies and their potential as an alternative form of learning or expression is drawing increasing attention from the educational gaming community (Kafai, 2001; Robertson & Good, 2005). We have done some preliminary work with all three of these modes, with most of efforts focused on constructionist approaches (Carbonaro et al., 2005; Szafron et al., 2005). In this paper, we show how our constructivist approach can be adapted to create instructivist classroom materials. On the instructivist side, there are three basic approaches. First, simply use games that were created as educational games such as Reader Rabbit etc. and incur all of the problems manifested in this approach. Second, use commercial games, such as Civilization III (a historical simulation game) (Squire, 2005). However, it can be difficult for the educator to align a commercial game with specific educational topics or goals. Third, adapt a commercial game to meet specific educational goals. This is the approach we describe in this paper. We describe how the same gamebuilding tools we put into the hands of students can be used by educators to easily adapt commercial CPRGs to create instructivist classroom materials in the form of educational computer games.", "title": "" }, { "docid": "53b48550158b06dfbdb8c44a4f7241c6", "text": "The primary aim of the study was to examine the relationship between media exposure and body image in adolescent girls, with a particular focus on the ‘new’ and as yet unstudied medium of the Internet. A sample of 156 Australian female high school students (mean age= 14.9 years) completed questionnaire measures of media consumption and body image. Internet appearance exposure and magazine reading, but not television exposure, were found to be correlated with greater internalization of thin ideals, appearance comparison, weight dissatisfaction, and drive for thinness. Regression analyses indicated that the effects of magazines and Internet exposure were mediated by internalization and appearance comparison. It was concluded that the Internet represents a powerful sociocultural influence on young women’s lives.", "title": "" }, { "docid": "5c521a43b743144ed2df29fd7adf4aa3", "text": "We address the problem of geo-registering ground-based multi-view stereo models by ground-to-aerial image matching. The main contribution is a fully automated geo-registration pipeline with a novel viewpoint-dependent matching method that handles ground to aerial viewpoint variation. We conduct large-scale experiments which consist of many popular outdoor landmarks in Rome. The proposed approach demonstrates a high success rate for the task, and dramatically outperforms state-of-the-art techniques, yielding geo-registration at pixel-level accuracy.", "title": "" }, { "docid": "368c91e483429b54989efea3a80fb370", "text": "A large amount of land-use, environment, socio-economic, energy and transport data is generated in cities. An integrated perspective of managing and analysing such big data can answer a number of science, policy, planning, governance and business questions and support decision making in enabling a smarter environment. This paper presents a theoretical and experimental perspective on the smart cities focused big data management and analysis by proposing a cloud-based analytics service. 
A prototype has been designed and developed to demonstrate the effectiveness of the analytics service for big data analysis. The prototype has been implemented using Hadoop and Spark and the results are compared. The service analyses the Bristol Open data by identifying correlations between selected urban environment indicators. Experiments are performed using Hadoop and Spark and results are presented in this paper. The data pertaining to quality of life mainly crime and safety & economy and employment was analysed from the data catalogue to measure the indicators spread over years to assess positive and negative trends.", "title": "" }, { "docid": "c11c2f6ab06b8cad46dc79bfc70cd321", "text": "An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches on the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems that already incorporate known techniques such as dropout. Our ensemble model using different attention architectures yields a new state-of-the-art result in the WMT’15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.1", "title": "" }, { "docid": "7a72f69ad4926798e12f6fa8e598d206", "text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "title": "" }, { "docid": "2d4e89df9c3e54add8a9d54a963c9910", "text": "The tremendous amount of the data obtained from the study of complex biological systems changes our view on the pathogenesis of human diseases. Instead of looking at individual components of biological processes, we focus our attention more on the interaction and dynamics of biological systems. A network representation and analysis of the physiology and pathophysiology of biological systems is an effective way to study their complex behavior. Specific perturbations can trigger cascades of failures, which lead to the malfunctioning of cellular networks and as a result to the development of specific diseases. 
In this review we discuss recent developments in the field of disease network analysis and highlight some of the topics and views that we think are important for understanding network-based disease mechanisms.", "title": "" }, { "docid": "3b2faaddf8f530799b758178700d9cce", "text": "This research work presents a method for automatic classification of medical images in two classes Normal and Abnormal based on image features and automatic abnormality detection. Our proposed system consists of four phases Preprocessing, Feature extraction, Classification, and Post processing. Statistical texture feature set is derived from normal and abnormal images. We used the KNN classifier for classifying image. The KNN classifier performance compared with kernel based SVM classifier (Linear and RBF). The confusion matrix computed and result shows that KNN obtain 80% classification rate which is more than SVM classification rate. So we choose KNN algorithm for classification of images. If image classified as abnormal then post processing step applied on the image and abnormal region is highlighted on the image. The system has been tested on the number of real CT scan brain images.", "title": "" }, { "docid": "f5ef7795ec28c8de19bfde30a2499350", "text": "DevOps and continuous development are getting popular in the software industry. Adopting these modern approaches in regulatory environments, such as medical device software, is not straightforward because of the demand for regulatory compliance. While DevOps relies on continuous deployment and integration, regulated environments require strict audits and approvals before releases. Therefore, the use of modern development approaches in regulatory environments is rare, as is the research on the topic. However, as software is more and more predominant in medical devices, modern software development approaches become attractive. This paper discusses the fit of DevOps for regulated medical device software development. We examine two related standards, IEC 62304 and IEC 82304-1, for obstacles and benefits of using DevOps for medical device software development. We found these standards to set obstacles for continuous delivery and integration. Respectively, development tools can help fulfilling the requirements of traceability and documentation of these standards.", "title": "" }, { "docid": "1f6bf9c06b7ee774bc08848293b5c94a", "text": "The success of a virtual learning environment (VLE) depends to a considerable extent on student acceptance and use of such an e-learning system. After critically assessing models of technology adoption, including the Technology Acceptance Model (TAM), TAM2, and the Unified Theory of Acceptance and Usage of Technology (UTAUT), we build a conceptual model to explain the differences between individual students in the level of acceptance and use of a VLE. This model extends TAM2 and includes subjective norm, personal innovativeness in the domain of information technology, and computer anxiety. Data were collected from 45 Chinese participants in an Executive MBA program. After performing satisfactory reliability and validity checks, the structural model was tested with the use of PLS. Results indicate that perceived usefulness has a direct effect on VLE use. Perceived ease of use and subjective norm have only indirect effects via perceived usefulness. Both personal innovativeness and computer anxiety have direct effects on perceived ease of use only. 
Implications are that program managers in education should not only concern themselves with basic system design but also explicitly address individual differences between VLE users. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fb0875ee874dc0ada51d0097993e16c8", "text": "The literature on testing effects is vast but supports surprisingly few prescriptive conclusions for how to schedule practice to achieve both durable and efficient learning. Key limitations are that few studies have examined the effects of initial learning criterion or the effects of relearning, and no prior research has examined the combined effects of these 2 factors. Across 3 experiments, 533 students learned conceptual material via retrieval practice with restudy. Items were practiced until they were correctly recalled from 1 to 4 times during an initial learning session and were then practiced again to 1 correct recall in 1-5 subsequent relearning sessions (across experiments, more than 100,000 short-answer recall responses were collected and hand-scored). Durability was measured by cued recall and rate of relearning 1-4 months after practice, and efficiency was measured by total practice trials across sessions. A consistent qualitative pattern emerged: The effects of initial learning criterion and relearning were subadditive, such that the effects of initial learning criterion were strong prior to relearning but then diminished as relearning increased. Relearning had pronounced effects on long-term retention with a relatively minimal cost in terms of additional practice trials. On the basis of the overall patterns of durability and efficiency, our prescriptive conclusion for students is to practice recalling concepts to an initial criterion of 3 correct recalls and then to relearn them 3 times at widely spaced intervals.", "title": "" }, { "docid": "be7f7d9c6a28b7d15ec381570752de95", "text": "Neural network are most popular in the research community due to its generalization abilities. Additionally, it has been successfully implemented in biometrics, features selection, object tracking, document image preprocessing and classification. This paper specifically, clusters, summarize, interpret and evaluate neural networks in document Image preprocessing. The importance of the learning algorithms in neural networks training and testing for preprocessing is also highlighted. Finally, a critical analysis on the reviewed approaches and the future research guidelines in the field are suggested.", "title": "" }, { "docid": "28a69b2e02ca56c6ca867749b2129295", "text": "The popular view of software engineering focuses on managing teams of people to produce large systems. This paper addresses a different angle of software engineering, that of development for re-use and portability. We consider how an essential part of most software products - the user interface - can be successfully engineered so that it can be portable across multiple platforms and on multiple devices. Our research has identified the structure of the problem domain, and we have filled in some of the answers. We investigate promising solutions from the model-driven frameworks of the 1990s, to modern XML-based specification notations (Views, XUL, XIML, XAML), multi-platform toolkits (Qt and Gtk), and our new work, Mirrors which pioneers reflective libraries. The methodology on which Views and Mirrors is based enables existing GUI libraries to be transported to new operating systems. 
The paper also identifies cross-cutting challenges related to education, standardization and the impact of mobile and tangible devices on the future design of UIs. This paper seeks to position user interface construction as an important challenge in software engineering, worthy of ongoing research.", "title": "" }, { "docid": "32172b93cb6050c4a93b8323a56ad6b4", "text": "This work presents a novel method for automatic detection and identification of heart sounds. Homomorphic filtering is used to obtain a smooth envelogram of the phonocardiogram, which enables a robust detection of events of interest in heart sound signal. Sequences of features extracted from the detected events are used as observations of a hidden Markov model. It is demonstrated that the task of detection and identification of the major heart sounds can be learned from unlabelled phonocardiograms by an unsupervised training process and without the assistance of any additional synchronizing channels", "title": "" }, { "docid": "3afa5a7f1ed0a4a2064f7c2a1f4780c6", "text": "Earthing (grounding) refers to bringing the body in contact with the Earth. Health benefits were previously reported, but no study exists about mood. This study was conducted to assess if Earthing improves mood. 40 adult participants were either grounded or sham-grounded (no grounding) for 1 hr. while relaxing in a comfortable recliner chair equipped with a conductive pillow, mat, and patches connecting them to the ground. This pilot project was double-blinded and the Brief Mood Introspection Scale (comprising 4 mood scales) was used. Pleasant and positive moods statistically significantly improved among grounded-but not sham-grounded-participants. It is concluded that the 1-hr. contact with the Earth improved mood more than expected by relaxation alone. More extensive studies are, therefore, warranted.", "title": "" }, { "docid": "6417100ffc2dcd0936a850b7e05156e9", "text": "An optical sensing system has been developed using a pair of orthogonally placed position sensitive detectors (PSD) to track 3D displacement of a microsurgical instrument tip in real-time. An infrared (IR) diode is used to illuminate the workspace. A ball is attached to the tip of an intraocular shaft to reflect IR rays onto the PSDs. Instrument tip position is then calculated from the centroid positions of reflected IR light on the respective PSDs. The system can be used to assess the accuracy of hand-held microsurgical instruments and operator performance in micromanipulation tasks, such as microsurgeries. In order to eliminate inherent nonlinearity of the PSDs and lenses, calibration is performed using a feedforward neural network. After calibration, percentage RMS error is reduced from about 5.46 % to about 0.16%. The system RMS noise is about 0.7 µm.
The sampling rate of the system is 250 Hz.", "title": "" }, { "docid": "b2aad34d91b5c38f794fc2577593798c", "text": "We present a model for pricing and hedging derivative securities and option portfolios in an environment where the volatility is not known precisely but is assumed instead to lie between two extreme values σmin and σmax. These bounds could be inferred from extreme values of the implied volatilities of liquid options or from high-low peaks in historical stock or option implied volatilities. They can be viewed as defining a confidence interval for future volatility values. We show that the extremal non-arbitrageable prices for the derivative asset, which arise as the volatility paths vary in such a band, can be described by a nonlinear PDE which we call the Black-Scholes-Barenblatt equation. In this equation the pricing volatility is selected dynamically from the two extreme values σmin and σmax according to the convexity of the value function. A simple algorithm for solving the equation by finite differencing or a trinomial tree is presented. We show that this model captures the importance of diversification in managing derivatives positions. It can be used systematically to construct efficient hedges using other derivatives in conjunction with the underlying asset. The uncertain volatility model. According to Arbitrage Pricing Theory, if the market presents no arbitrage opportunities, there exists a probability measure on future scenarios such that the price of any security is the expectation of its discounted cash flows (Duffie). Such a probability is known as a martingale measure (Harrison and Kreps) or a pricing measure. Determining the appropriate martingale measure associated with a sector of the security space (e.g., the stock of a company and a riskless short-term bond) permits the valuation of any contingent claim based on these securities. However, pricing measures are often difficult to calculate precisely and there may exist more than one measure consistent with a given market. It is useful to view the non-uniqueness of pricing measures as reflecting the many choices for derivative asset prices that can exist in an uncertain economy. For example, option prices reflect the market's expectation about the future value of the underlying asset as well as its projection of future volatility. Since this projection changes as the market reacts to new information, implied volatility fluctuates unpredictably. In these circumstances, fair option values and perfectly replicating hedges cannot be determined with certainty. The existence of so-called volatility risk in option trading is a concrete manifestation of market incompleteness. This paper addresses the issue of derivative asset pricing and hedging in an uncertain future volatility environment. For this purpose, instead of choosing a pricing model that incorporates a complete view of the forward volatility as a single number or a predetermined function of time and price (term structure of volatilities), or even a stochastic process with given statistics, we propose to operate under the less stringent assumption that the volatility of future prices is restricted to lie in a bounded set but is otherwise undetermined. For simplicity we restrict our discussion to derivative securities based on a single liquidly traded stock which pays no dividends over the contract's lifetime and assume a constant interest rate. The basic assumption then
reduces to postulating that under all admissible pricing measures future volatility paths will be restricted to lie within a band. Accordingly, we assume that the paths followed by future stock prices are Itô processes, viz. dSt = St(σt dZt + μt dt), where σt and μt are non-anticipative functions such that", "title": "" } ]
scidocsrr
84548fb3c2fd41eb9126b3332243ddb7
Exact Recovery of Hard Thresholding Pursuit
[ { "docid": "b0382aa0f8c8171b78dba1c179554450", "text": "This paper is concerned with the hard thresholding operator which sets all but the k largest absolute elements of a vector to zero. We establish a tight bound to quantitatively characterize the deviation of the thresholded solution from a given signal. Our theoretical result is universal in the sense that it holds for all choices of parameters, and the underlying analysis depends only on fundamental arguments in mathematical optimization. We discuss the implications for two domains: Compressed Sensing. On account of the crucial estimate, we bridge the connection between the restricted isometry property (RIP) and the sparsity parameter for a vast volume of hard thresholding based algorithms, which renders an improvement on the RIP condition especially when the true sparsity is unknown. This suggests that in essence, many more kinds of sensing matrices or fewer measurements are admissible for the data acquisition procedure. Machine Learning. In terms of large-scale machine learning, a significant yet challenging problem is learning accurate sparse models in an efficient manner. In stark contrast to prior work that attempted the `1-relaxation for promoting sparsity, we present a novel stochastic algorithm which performs hard thresholding in each iteration, hence ensuring such parsimonious solutions. Equipped with the developed bound, we prove the global linear convergence for a number of prevalent statistical models under mild assumptions, even though the problem turns out to be non-convex.", "title": "" } ]
[ { "docid": "ce64c8f2769957a5b93e0947c1987db5", "text": "Power companies can benefit from the use of knowledge discovery methods and statistical machine learning for preventive maintenance. We introduce a general process for transforming historical electrical grid data into models that aim to predict the risk of failures for components and systems. These models can be used directly by power companies to assist with prioritization of maintenance and repair work. Specialized versions of this process are used to produce (1) feeder failure rankings, (2) cable, joint, terminator, and transformer rankings, (3) feeder Mean Time Between Failure (MTBF) estimates, and (4) manhole events vulnerability rankings. The process in its most general form can handle diverse, noisy, sources that are historical (static), semi-real-time, or real-time, incorporates state-of-the-art machine learning algorithms for prioritization (supervised ranking or MTBF), and includes an evaluation of results via cross-validation and blind test. Above and beyond the ranked lists and MTBF estimates are business management interfaces that allow the prediction capability to be integrated directly into corporate planning and decision support; such interfaces rely on several important properties of our general modeling approach: that machine learning features are meaningful to domain experts, that the processing of data is transparent, and that prediction results are accurate enough to support sound decision making. We discuss the challenges in working with historical electrical grid data that were not designed for predictive purposes. The “rawness” of these data contrasts with the accuracy of the statistical models that can be obtained from the process; these models are sufficiently accurate to assist in maintaining New York City's electrical grid.", "title": "" }, { "docid": "17599a683c92e9ad0d112fb358e0d30a", "text": "Super-resolution algorithms reconstruct a high resolution image from a set of low resolution images of a scene. Precise alignment of the input images is an essential part of such algorithms. If the low resolution images are undersampled and have aliasing artifacts, the performance of standard registration algorithms decreases. We propose a frequency domain technique to precisely register a set of aliased images, based on their low-frequency, aliasing-free part. A high resolution image is then reconstructed using cubic interpolation. Our algorithm is compared to other algorithms in simulations and practical experiments using real aliased images. Both show very good visual results and prove the attractivity of our approach in the case of aliased input images. A possible application is to digital cameras where a set of rapidly acquired images can be used to recover a higher resolution final image. Index Terms Super-resolution imaging, image registration, aliasing", "title": "" }, { "docid": "edbc09ea4ad9792abd9aa05176c17d42", "text": "The therapeutic nature of the nurse-patient relationship is grounded in an ethic of caring. Florence Nightingale envisioned nursing as an art and a science...a blending of humanistic, caring presence with evidence-based knowledge and exquisite skill. In this article, the author explores the caring practice of nursing as a framework for understanding moral accountability and integrity in practice. Being morally accountable and responsible for one's judgment and actions is central to the nurse's role as a moral agent. 
Nurses who practice with moral integrity possess a strong sense of themselves and act in ways consistent with what they understand is the right thing to do. A review of the literature related to caring theory, the concepts of moral accountability and integrity, and the documents that speak of these values and concepts in professional practice (eg, Code of Ethics for Nurses with Interpretive Statements, Nursing's Social Policy Statement) are presented in this article.", "title": "" }, { "docid": "2d2aca831aaf6b66c19b4ac9c2ae8ebb", "text": "Reinforcement Learning (RL) algorithms can suffer from poor sample efficiency when rewards are delayed and sparse. We introduce a solution that enables agents to learn temporally extended actions at multiple levels of abstraction in a sample efficient and automated fashion. Our approach combines universal value functions and hindsight learning, allowing agents to learn policies belonging to different time scales in parallel. We show that our method significantly accelerates learning in a variety of discrete and continuous tasks. A video illustrating our results is available at https://www.youtube.com/watch?v=jQ5FkDgTBLI.", "title": "" }, { "docid": "ddd275168d4e066df5e5937790a93986", "text": " The Jyros (JR) and the Advancing The Standard (ATS) valves were compared with the St. Jude Medical (SJM) valve in the mitral position to study the effects of design differences, installed valve orientation to the flow, and closing sounds using particle tracking velocimetry and particle image velocimetry methods utilizing a high-speed video flow visualization technique to map the velocity field. Sound measurements were made to confirm the claims of the manufacturers. Based on the experimental data, the following general conclusions can be made: On the vertical measuring plane which passes through the centers of the aortic and the mitral valves, the SJM valve shows a distinct circulatory flow pattern when the valve is installed in the antianatomical orientation; the SJM valve maintains the flow through the central orifice quite well; the newer curved leaflet JR valve and the ATS valve, which does not fully open during the peak flow phase, generates a higher but divergent flow close to the valve location when the valve was installed anatomically. The antianatomically installed JR valve showed diverse and less distinctive flow patterns and slower velocity on the central measuring plane than the SJM valve did, with noticeably lower valve closing noise. On the velocity field directly below the mitral valve that is normal to the previous measuring plane, the three valves show symmetrical twin circulations due to the divergent nature of the flow generated by the two inclined half discs; the SJM valve with centrally downward circulation is contrasted by the two other valves with peripherally downward circulation. These differences may have an important role in generation of the valve closing sound.", "title": "" }, { "docid": "2fb2c7b5bec56d59c453d6781a80f7bf", "text": "Automatic generation of natural language description for individual images (a.k.a. image captioning) has attracted extensive research attention. In this paper, we take one step further to investigate the generation of a paragraph to describe a photo stream for the purpose of storytelling. 
This task is even more challenging than individual image description due to the difficulty in modeling the large visual variance in an ordered photo collection and in preserving the long-term language coherence among multiple sentences. To deal with these challenges, we formulate the task as a sequence-to-sequence learning problem and propose a novel joint learning model by leveraging the semantic coherence in a photo stream. Specifically, to reduce visual variance, we learn a semantic space by jointly embedding each photo with its corresponding contextual sentence, so that the semantically related photos and their correlations are discovered. Then, to preserve language coherence in the paragraph, we learn a novel Bidirectional Attention-based Recurrent Neural Network (BARNN) model, which can attend on the discovered semantic relation to produce a sentence sequence and maintain its consistence with the photo stream. We integrate the two-step learning components into one single optimization formulation and train the network in an end-to-end manner. Experiments on three widely-used datasets (NYC/Disney/SIND) show that the proposed approach outperforms state-of-the-art methods with large margins for both retrieval and paragraph generation tasks. We also show the subjective preference of the machinegenerated stories by the proposed approach over the baselines through a user study with 40 human subjects.", "title": "" }, { "docid": "c116aab75223001bb4d216501b3c3b39", "text": "OBJECTIVE\nBurnout, a psychological consequence of prolonged work stress, has been shown to coexist with physical and mental disorders. The aim of this study was to investigate whether burnout is related to all-cause mortality among employees.\n\n\nMETHODS\nIn 1996, of 15,466 Finnish forest industry employees, 9705 participated in the 'Still Working' study and 8371 were subsequently identified from the National Population Register. Those who had been treated in a hospital for the most common causes of death prior to the assessment of burnout were excluded on the basis of the Hospital Discharge Register, resulting in a final study population of 7396 people. Burnout was measured using the Maslach Burnout Inventory-General Survey. Dates of death from 1996 to 2006 were extracted from the National Mortality Register. Mortality was predicted with Cox hazard regression models, controlling for baseline sociodemographic factors and register-based health status according to entitled medical reimbursement and prescribed medication for mental health problems, cardiac risk factors, and pain problems.\n\n\nRESULTS\nDuring the 10-year 10-month follow-up, a total of 199 employees had died. The risk of mortality per one-unit increase in burnout was 35% higher (95% CI 1.07-1.71) for total score and 26% higher (0.99-1.60) for exhaustion, 29% higher for cynicism (1.03-1.62), and 22% higher for diminished professional efficacy (0.96-1.55) in participants who had been under 45 at baseline. After adjustments, only the associations regarding burnout and exhaustion were statistically significant. Burnout was not related to mortality among the older employees.\n\n\nCONCLUSION\nBurnout, especially work-related exhaustion, may be a risk for overall survival.", "title": "" }, { "docid": "fc50b185323c45e3d562d24835e99803", "text": "The neuropeptide calcitonin gene-related peptide (CGRP) is implicated in the underlying pathology of migraine by promoting the development of a sensitized state of primary and secondary nociceptive neurons. 
The ability of CGRP to initiate and maintain peripheral and central sensitization is mediated by modulation of neuronal, glial, and immune cells in the trigeminal nociceptive signaling pathway. There is accumulating evidence to support a key role of CGRP in promoting cross excitation within the trigeminal ganglion that may help to explain the high co-morbidity of migraine with rhinosinusitis and temporomandibular joint disorder. In addition, there is emerging evidence that CGRP facilitates and sustains a hyperresponsive neuronal state in migraineurs mediated by reported risk factors such as stress and anxiety. In this review, the significant role of CGRP as a modulator of the trigeminal system will be discussed to provide a better understanding of the underlying pathology associated with the migraine phenotype.", "title": "" }, { "docid": "7618fa5b704c892b6b122f3602893d75", "text": "At the dawn of the second automotive century it is apparent that the competitive realm of the automotive industry is shifting away from traditional classifications based on firms’ production systems or geographical homes. Companies across the regional and volume spectrum have adopted a portfolio of manufacturing concepts derived from both mass and lean production paradigms, and the recent wave of consolidation means that regional comparisons can no longer be made without considering the complexities induced by the diverse ownership structure and plethora of international collaborations. In this chapter we review these dynamics and propose a double helix model illustrating how the basis of competition has shifted from cost-leadership during the heyday of Ford’s original mass production, to variety and choice following Sloan’s portfolio strategy, to diversification through leadership in design, technology or manufacturing excellence, as in the case of Toyota, and to mass customisation, which marks the current competitive frontier. We will explore how the production paradigms that have determined much of the competition in the first automotive century have evolved, what trends shape the industry today, and what it will take to succeed in the automotive industry of the future. 1 This chapter provides a summary of research conducted as part of the ILIPT Integrated Project and the MIT International Motor Vehicle Program (IMVP), and expands on earlier works, including the book The second century: reconnecting customer and value chain through build-toorder (Holweg and Pil 2004) and the paper Beyond mass and lean production: on the dynamics of competition in the automotive industry (Économies et Sociétés: Série K: Économie de l’Enterprise, 2005, 15:245–270).", "title": "" }, { "docid": "a9dd71d336baa0ea78ceb0435be67f67", "text": "In current credit ratings models, various accounting-based information are usually selected as prediction variables, based on historical information rather than the market’s assessment for future. In the study, we propose credit rating prediction model using market-based information as a predictive variable. In the proposed method, Moody’s KMV (KMV) is employed as a tool to evaluate the market-based information of each corporation. To verify the proposed method, using the hybrid model, which combine random forests (RF) and rough set theory (RST) to extract useful information for credit rating. The results show that market-based information does provide valuable information in credit rating predictions. 
Moreover, the proposed approach provides better classification results and generates meaningful rules for credit ratings.", "title": "" }, { "docid": "6b7d2d82bbfbaa7f55c25b4a304c8d4c", "text": "Services that are delivered over the Internet—e-services—pose unique problems yet offer unprecedented opportunities. In this paper, we classify e-services along the dimensions of their level of digitization and the nature of their target markets (business-to-business, business-to-consumer, consumer-to-consumer). Using the case of application services, we analyze how they differ from traditional software procurement and development. Next, we extend the concept of modular platforms to this domain and identify how knowledge management can be used to rapidly assemble new application services. We also discuss how such traceability-based knowledge management can facilitate e-service evolution and version-based market segmentation.", "title": "" }, { "docid": "7190e8e6f6c061bed8589719b7d59e0d", "text": "Image-level feature descriptors obtained from convolutional neural networks have shown powerful representation capabilities for image retrieval. In this paper, we present an unsupervised method to aggregate deep convolutional features into compact yet discriminative image vectors by simulating the dynamics of heat diffusion. A distinctive problem in image retrieval is that repetitive or bursty features tend to dominate feature representations, leading to less than ideal matches. We show that by considering each deep feature as a heat source, our unsupervised aggregation method is able to avoid over-representation of bursty features. We additionally provide a practical solution for the proposed aggregation method, and further show the efficiency of our method in experimental evaluation. Finally, we extensively evaluate the proposed approach with pre-trained and fine-tuned deep networks on common public benchmarks, and show superior performance compared to previous work. Image retrieval has always been an attractive research topic in the field of computer vision. By allowing users to search similar images from a large database of digital images, it provides a natural and flexible interface for image archiving and browsing. Convolutional Neural Networks (CNNs) have shown remarkable accuracy in tasks such as image classification and object detection. Recent research has also shown positive results of using CNNs on image retrieval (Babenko and Lempitsky 2015; Kalantidis, Mellina, and Osindero 2016; Hoang et al. 2017). However, unlike image classification approaches, which often use global feature vectors produced by fully connected layers, these methods extract local features depicting image patches from the outputs of convolutional layers and aggregate these features into compact (a few hundred dimensions) image-level descriptors. Once meaningful and representative image-level descriptors are defined, visually similar images are retrieved by computing similarities between pre-computed database feature representations and query representations. In this paper we devise a method to avoid over-representing bursty features. Inspired by an observation of similar phenomena in textual data, Jegou et al. (Jégou, Douze, and Schmid 2009) identified burstiness as the phenomenon by which overly repetitive features within an instance tend to dominate the instance feature representation.
In order to alleviate this issue, we propose a feature aggregation approach that emulates the dynamics of heat diffusion. The idea is to model feature maps as a heat system where we weight highly the features leading to low system temperatures. This is because that these features are less connected to other features, and therefore they are more distinctive. The dynamics of the temperature in such system can be estimated using the partial differential equation induced by the heat equation. Heat diffusion, and more specifically anisotropic diffusion, has been used successfully in various image processing and computer vision tasks. Ranging from the classical work of Perona and Malik (Perona and Malik 1990) to further applications in image smoothing, image regularization, image co-segmentation, and optical flow estimation (Zhang, Zheng, and Cai 2010; Tschumperle and Deriche 2005; Kim et al. 2011; Bruhn, Weickert, and Schnörr 2005). However, to our knowledge, it has not been applied to weight features from the outputs of a deep convolutional neural network. We show that by combining this classical image processing technique with a deep learning model, we are able to obtain significant gains against previous work. Our contributions can be summarized as follows: • By greedily considering each deep feature as a heat source and enforcing the temperature of the system be a constant within each heat source, we propose a novel efficient feature weighting approach to reduce the undesirable influence of bursty features. • We provide a practical solution to computing weights for our feature weighting method. Additionally, we conduct extensive quantitative evaluations on commonly used image retrieval benchmarks, and demonstrate substantial performance improvement over existing unsupervised methods for feature aggregation.", "title": "" }, { "docid": "f70447a47fb31fc94d6b57ca3ef57ad3", "text": "BACKGROUND\nOn Aug 14, 2014, the US Food and Drug Administration approved the antiangiogenesis drug bevacizumab for women with advanced cervical cancer on the basis of improved overall survival (OS) after the second interim analysis (in 2012) of 271 deaths in the Gynecologic Oncology Group (GOG) 240 trial. In this study, we report the prespecified final analysis of the primary objectives, OS and adverse events.\n\n\nMETHODS\nIn this randomised, controlled, open-label, phase 3 trial, we recruited patients with metastatic, persistent, or recurrent cervical carcinoma from 81 centres in the USA, Canada, and Spain. Inclusion criteria included a GOG performance status score of 0 or 1; adequate renal, hepatic, and bone marrow function; adequately anticoagulated thromboembolism; a urine protein to creatinine ratio of less than 1; and measurable disease. Patients who had received chemotherapy for recurrence and those with non-healing wounds or active bleeding conditions were ineligible. We randomly allocated patients 1:1:1:1 (blocking used; block size of four) to intravenous chemotherapy of either cisplatin (50 mg/m2 on day 1 or 2) plus paclitaxel (135 mg/m2 or 175 mg/m2 on day 1) or topotecan (0·75 mg/m2 on days 1-3) plus paclitaxel (175 mg/m2 on day 1) with or without intravenous bevacizumab (15 mg/kg on day 1) in 21 day cycles until disease progression, unacceptable toxic effects, voluntary withdrawal by the patient, or complete response. We stratified randomisation by GOG performance status (0 vs 1), previous radiosensitising platinum-based chemotherapy, and disease status (recurrent or persistent vs metastatic). 
We gave treatment open label. Primary outcomes were OS (analysed in the intention-to-treat population) and adverse events (analysed in all patients who received treatment and submitted adverse event information), assessed at the second interim and final analysis by the masked Data and Safety Monitoring Board. The cutoff for final analysis was 450 patients with 346 deaths. This trial is registered with ClinicalTrials.gov, number NCT00803062.\n\n\nFINDINGS\nBetween April 6, 2009, and Jan 3, 2012, we enrolled 452 patients (225 [50%] in the two chemotherapy-alone groups and 227 [50%] in the two chemotherapy plus bevacizumab groups). By March 7, 2014, 348 deaths had occurred, meeting the prespecified cutoff for final analysis. The chemotherapy plus bevacizumab groups continued to show significant improvement in OS compared with the chemotherapy-alone groups: 16·8 months in the chemotherapy plus bevacizumab groups versus 13·3 months in the chemotherapy-alone groups (hazard ratio 0·77 [95% CI 0·62-0·95]; p=0·007). Final OS among patients not receiving previous pelvic radiotherapy was 24·5 months versus 16·8 months (0·64 [0·37-1·10]; p=0·11). Postprogression OS was not significantly different between the chemotherapy plus bevacizumab groups (8·4 months) and chemotherapy-alone groups (7·1 months; 0·83 [0·66-1·05]; p=0·06). Fistula (any grade) occurred in 32 (15%) of 220 patients in the chemotherapy plus bevacizumab groups (all previously irradiated) versus three (1%) of 220 in the chemotherapy-alone groups (all previously irradiated). Grade 3 fistula developed in 13 (6%) versus one (<1%). No fistulas resulted in surgical emergencies, sepsis, or death.\n\n\nINTERPRETATION\nThe benefit conferred by incorporation of bevacizumab is sustained with extended follow-up as evidenced by the overall survival curves remaining separated. After progression while receiving bevacizumab, we did not observe a negative rebound effect (ie, shorter survival after bevacizumab is stopped than after chemotherapy alone is stopped). These findings represent proof-of-concept of the efficacy and tolerability of antiangiogenesis therapy in advanced cervical cancer.\n\n\nFUNDING\nNational Cancer Institute.", "title": "" }, { "docid": "eebf03df49eb4a99f61d371e059ef43e", "text": "In theoretical cognitive science, there is a tension between highly structured models whose parameters have a direct psychological interpretation and highly complex, general-purpose models whose parameters and representations are difficult to interpret. The former typically provide more insight into cognition but the latter often perform better. This tension has recently surfaced in the realm of educational data mining, where a deep learning approach to estimating student proficiency, termed deep knowledge tracing or DKT [17], has demonstrated a stunning performance advantage over the mainstay of the field, Bayesian knowledge tracing or BKT [3].", "title": "" }, { "docid": "bba0687091acf218d9039c87cd08c01c", "text": "Our project had two main objectives. First, we wanted to use historical tennis match data to predict the outcomes of future tennis matches. Next, we wanted to use the predictions from our resulting model to beat the current betting odds. 
After setting up our prediction and betting models, we were able to accurately predict the outcomes of 69.6% of matches in the 2016 and 2017 tennis seasons, and turn a 3.3% profit per match.", "title": "" }, { "docid": "bac7f4109f023ee2df039f340dbaefb1", "text": "In many important text classification problems, acquiring class labels for training documents is costly, while gathering large quantities of unlabeled data is cheap. This paper shows that the accuracy of text classifiers trained with a small number of labeled documents can be improved by augmenting this small training set with a large pool of unlabeled documents. We present a theoretical argument showing that, under common assumptions, unlabeled data contain information about the target function. We then introduce an algorithm for learning from labeled and unlabeled text based on the combination of Expectation-Maximization with a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents; it then trains a new classifier using the labels for all the documents, and iterates to convergence. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 33%.", "title": "" }, { "docid": "c1fc1a31d9f5033a7469796d1222aef3", "text": "Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static camera. This information is usually provided by motor encoders; however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach for DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show the extensions required in order to perform simultaneous online estimation of the joint angles and vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show the experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy that is comparable to a standard static multi-camera configuration.", "title": "" }, { "docid": "c206399c6ebf96f3de3aa5fdb10db49d", "text": "Canine monocytotropic ehrlichiosis (CME), caused by the rickettsia Ehrlichia canis, is an important canine disease with a worldwide distribution. Diagnosis of the disease can be challenging due to its different phases and multiple clinical manifestations. CME should be suspected when a compatible history (living in or traveling to an endemic region, previous tick exposure), typical clinical signs and characteristic hematological and biochemical abnormalities are present. Traditional diagnostic techniques including hematology, cytology, serology and isolation are valuable diagnostic tools for CME; however, a definitive diagnosis of E. canis infection requires molecular techniques. This article reviews the current literature covering the diagnosis of infection caused by E.
canis.", "title": "" }, { "docid": "564591c62475a2f9ec1eafb8ce95ae32", "text": "IT companies worldwide have started to improve their service management processes based on best practice frameworks, such as IT Infrastructure Library (ITIL). However, many of these companies face difficulties in demonstrating the positive outcomes of IT service management (ITSM) process improvement. This has led us to investigate the research problem: What positive impacts have resulted from IT service management process improvement? The main contributions of this paper are 1) to identify the ITSM process improvement outcomes in two IT service provider organizations and 2) provide advice as lessons learnt.", "title": "" }, { "docid": "8979ac412e25cf842611dcb257836cea", "text": "Tensors or <italic>multiway arrays</italic> are functions of three or more indices <inline-formula> <tex-math notation=\"LaTeX\">$(i,j,k,\\ldots)$</tex-math></inline-formula>—similar to matrices (two-way arrays), which are functions of two indices <inline-formula><tex-math notation=\"LaTeX\">$(r,c)$</tex-math></inline-formula> for (row, column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining, and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth <italic>and depth</italic> that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.", "title": "" } ]
scidocsrr
f9a7efbed11dfad3a174b2695f861ad7
Boosting and Differential Privacy
[ { "docid": "b6fa1ee8c2f07b34768a78591c33bbbe", "text": "We prove that there are arbitrarily long arithmetic progressions of primes. There are three major ingredients. [. . . ] [. . . ] for all $x \in \mathbb{Z}_N$ (here $(m_0, t_0, L_0) = (3, 2, 1)$) and $$\mathbb{E}\Big( \nu((x-y)/2)\,\nu((x-y+h_2)/2)\,\nu(-y)\,\nu(-y-h_1) \times \nu((x-y')/2)\,\nu((x-y'+h_2)/2)\,\nu(-y)\,\nu(-y-h_1) \times \nu(x)\,\nu(x+h_1)\,\nu(x+h_2)\,\nu(x+h_1+h_2) \;\Big|\; x, h_1, h_2, y, y' \in \mathbb{Z}_N \Big) = 1 + o(1) \quad (0.1)$$ (here $(m_0, t_0, L_0) = (12, 5, 2)$). [. . . ] Proposition 0.1 (Generalised von Neumann). Suppose that $\nu$ is $k$-pseudorandom. Let $f_0, \ldots, f_{k-1} \in L(\mathbb{Z}_N)$ be functions which are pointwise bounded by $\nu + \nu_{\mathrm{const}}$, or in other words $|f_j(x)| \le \nu(x) + 1$ for all $x \in \mathbb{Z}_N$, $0 \le j \le k-1$. (0.2) Let $c_0, \ldots, c_{k-1}$ be a permutation of $\{0, 1, \ldots, k-1\}$ (in practice we will take $c_j := j$). Then $$\mathbb{E}\Big( \prod_{j=0}^{k-1} f_j(x + c_j r) \;\Big|\; x, r \in \mathbb{Z}_N \Big) = O\Big( \inf_{0 \le j \le k-1} \|f_j\|_{U^{k-1}} \Big) + o(1).$$", "title": "" }, { "docid": "1d9004c4115c314f49fb7d2f44aaa598", "text": "We show by means of several examples that robust statistical estimators present an excellent starting point for differentially private estimators. Our algorithms use a new paradigm for differentially private mechanisms, which we call Propose-Test-Release (PTR), and for which we give a formal definition and general composition theorems.", "title": "" } ]
[ { "docid": "bc49930fa967b93ed1e39b3a45237652", "text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).", "title": "" }, { "docid": "21384ea8d80efbf2440fb09a61b03be2", "text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.", "title": "" }, { "docid": "022a18f7fe530372720c6cb9daf64b94", "text": "Deep neural networks are currently among the most commonly used classifiers. Despite easily achieving very good performance, one of the best selling points of these models is their modular design – one can conveniently adapt their architecture to specific needs, change connectivity patterns, attach specialised layers, experiment with a large amount of activation functions, normalisation schemes and many others. While one can find impressively wide spread of various configurations of almost every aspect of the deep nets, one element is, in authors’ opinion, underrepresented – while solving classification problems, vast majority of papers and applications simply use log loss. In this paper we try to investigate how particular choices of loss functions affect deep models and their learning dynamics, as well as resulting classifiers robustness to various effects. We perform experiments on classical datasets, as well as provide some additional, theoretical insights into the problem. In particular we show that L1 and L2 losses are, quite surprisingly, justified classification objectives for deep nets, by providing probabilistic interpretation in terms of expected misclassification. 
We also introduce two losses which are not typically used as deep nets objectives and show that they are viable alternatives to the existing ones.", "title": "" }, { "docid": "f3abf5a6c20b6fff4970e1e63c0e836b", "text": "We demonstrate a physically-based technique for predicting the drape of a wide variety of woven fabrics. The approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation. By testing a cloth sample in a Kawabata fabric testing device, we obtain data that is used to tune the model's energy functions, so that it reproduces the draping behavior of the original material. Photographs, comparing the drape of actual cloth with visualizations of simulation results, show that we are able to reliably model the unique large-scale draping characteristics of distinctly different fabric types.", "title": "" }, { "docid": "5c74d0cfcbeaebc29cdb58a30436556a", "text": "Modular decomposition is an effective means to achieve a complex system, but that of current part-component-based does not meet the needs of the positive development of the production. Design Structure Matrix (DSM) can simultaneously reflect the sequence, iteration, and feedback information, and express the parallel, sequential, and coupled relationship between DSM elements. This article, a modular decomposition method, named Design Structure Matrix Clustering modularize method, is proposed, concerned procedures are define, based on sorting calculate and clustering analysis of DSM, according to the rules of rows exchanges and columns exchange with the same serial number. The purpose and effectiveness of DSM clustering modularize method are confirmed through case study of assembly and calibration system for the large equipment.", "title": "" }, { "docid": "8e7cfad4f1709101e5790343200d1e16", "text": "Although electronic commerce experts often cite privacy concerns as barriers to consumer electronic commerce, there is a lack of understanding about how these privacy concerns impact consumers' willingness to conduct transactions online. Therefore, the goal of this study is to extend previous models of e-commerce adoption by specifically assessing the impact that consumers' concerns for information privacy (CFIP) have on their willingness to engage in online transactions. To investigate this, we conducted surveys focusing on consumers’ willingness to transact with a well-known and less well-known Web merchant. Results of the study indicate that concern for information privacy affects risk perceptions, trust, and willingness to transact for a wellknown merchant, but not for a less well-known merchant. In addition, the results indicate that merchant familiarity does not moderate the relationship between CFIP and risk perceptions or CFIP and trust. Implications for researchers and practitioners are discussed. 1 Elena Karahanna was the accepting senior editor. Kathy Stewart Schwaig and David Gefen were the reviewers. This paper was submitted on October 12, 2004, and went through 4 revisions. Information Privacy and Online Consumer Purchasing/Van Slyke et al. Journal of the Association for Information Systems Vol. 7 No. 6, pp. 415-444/June 2006 416 Introduction Although information privacy concerns have long been cited as barriers to consumer adoption of business-to-consumer (B2C) e-commerce (Hoffman et al., 1999, Sullivan, 2005), the results of studies focusing on privacy concerns have been equivocal. 
Some studies find that mechanisms intended to communicate information about privacy protection such as privacy seals and policies increase intentions to engage in online transactions (Miyazaki and Krishnamurthy, 2002). In contrast, others find that these mechanisms have no effect on consumer willingness to engage in online transactions (Kimery and McCord, 2002). Understanding how consumers’ concerns for information privacy (CFIP), or their concerns about how organizations use and protect personal information (Smith et al., 1996), impact consumers’ willingness to engage in online transactions is important to our knowledge of consumer-oriented e-commerce. For example, if CFIP has a strong direct impact on willingness to engage in online transactions, both researchers and practitioners may want to direct efforts at understanding how to allay some of these concerns. In contrast, if CFIP only impacts willingness to transact through other factors, then efforts may be directed at influencing these factors through both CFIP as well as through their additional antecedents. Prior research on B2C e-commerce examining consumer willingness to transact has focused primarily on the role of trust and trustworthiness either using trust theory or using acceptance, and adoption-based theories as frameworks from which to study trust. The research based on trust theories tends to focus on the structure of trust or on antecedents to trust (Bhattacherjee, 2002; Gefen, 2000; Jarvenpaa et al., 2000; McKnight et al., 2002a). Adoptionand acceptance-based research includes studies using the Technology Acceptance Model (Gefen et al., 2003) and diffusion theory (Van Slyke et al., 2004) to examine the effects of trust within well-established models. To our knowledge, studies of the effects of trust in the context of e-commerce transactions have not included CFIP as an antecedent in their models. The current research addresses this by examining the effect of CFIP on willingness to transact within a nomological network of additional antecedents (i.e., trust and risk) that we expect will be influenced by CFIP. In addition, familiarity with the Web merchant may moderate the relationship between CFIP and both trust and risk perceptions. As an individual becomes more familiar with the Web merchant and how it collects and protects personal information, perceptions may be driven more by knowledge of the merchant than by information concerns. This differential relationship between factors for more familiar (e.g. experienced) and less familiar merchants is similar to findings of previous research on user acceptance for potential and repeat users of technology (Karahanna et al., 1999) and e-commerce customers (Gefen et al., 2003). Thus, this research has two goals. The first goal is to better understand the role that consumers’ concerns for information privacy (CFIP) have on their willingness to engage in online transactions. The second goal is to investigate whether familiarity moderates the effects of CFIP on key constructs in our nomological network. Specifically, the following research questions are investigated: How do consumers’ concerns for information privacy affect their willingness to engage in online transactions? Does consumers' familiarity with a Web merchant moderate the impact of concern for information privacy on risk and on trust? Information Privacy and Online Consumer Purchasing/Van Slyke et al. Journal of the Association for Information Systems Vol. 7 No. 6, pp. 
415-444/June 2006 417 This paper is organized as follows. First, we provide background information regarding the existing literature and the constructs of interest. Next, we present our research model and develop the hypotheses arising from the model. We then describe the method by which we investigated the hypotheses. This is followed by a discussion of the results of our analysis. We conclude the paper by discussing the implications and limitations of our work, along with suggestions for future research. Research Model and Hypotheses Figure 1 presents this study's research model. Given that concern for information privacy is the central focus of the study, we embed the construct within a nomological network of willingness to transact in prior research. Specifically, we include risk, familiarity with the merchant, and trust (Bhattacherjee, 2002; Gefen et al., 2003; Jarvenpaa and Tractinsky, 1999; Van Slyke et al., 2004) constructs that CFIP is posited to influence and that have been found to influence. We first discuss CFIP and then present the theoretical rationale that underlies the relationships presented in the research model. We begin our discussion of the research model by providing an overview of CFIP, focusing on this construct in the context of e-commerce.", "title": "" }, { "docid": "7401f7a0f82fa6384cd62eb4b77c1ea2", "text": "The advance of GPS-enabled devices allows people to record their location histories with GPS traces, which imply human behaviors and preferences related to travel. In this article, we perform two types of travel recommendations by mining multiple users' GPS traces. The first is a generic one that recommends a user with top interesting locations and travel sequences in a given geospatial region. The second is a personalized recommendation that provides an individual with locations matching her travel preferences. To achieve the first recommendation, we model multiple users' location histories with a tree-based hierarchical graph (TBHG). Based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based model to infer the interest level of a location and a user's travel experience (knowledge). In the personalized recommendation, we first understand the correlation between locations, and then incorporate this correlation into a collaborative filtering (CF)-based model, which predicts a user's interests in an unvisited location based on her locations histories and that of others. We evaluated our system based on a real-world GPS trace dataset collected by 107 users over a period of one year. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, we achieved a better performance in recommending travel sequences beyond baselines like rank-by-count. Regarding the personalized recommendation, our approach is more effective than the weighted Slope One algorithm with a slightly additional computation, and is more efficient than the Pearson correlation-based CF model with the similar effectiveness.", "title": "" }, { "docid": "2f8635d4da12fd6d161c7b10c140f8f9", "text": "Technology has made navigation in 3D real time possible and this has made possible what seemed impossible. This paper explores the aspect of deep visual odometry methods for mobile robots. Visual odometry has been instrumental in making this navigation successful. 
Noticeable challenges in mobile robots including the inability to attain Simultaneous Localization and Mapping have been solved by visual odometry through its cameras which are suitable for human environments. More intuitive, precise and accurate detection have been made possible by visual odometry in mobile robots. Another challenge in the mobile robot world is the 3D map reconstruction for exploration. A dense map in mobile robots can facilitate for localization and more accurate findings. I. VISUAL ODOMETRY IN MOBILE ROBOTS Mobile robot applications heavily rely on the ability of the vehicle to achieve accurate localization. It is essential that a robot is able to maintain knowledge about its position at all times in order to achieve autonomous navigation. To attain this, various techniques, systems and sensors have been established to aid with mobile robot positioning including visual odometry [1]. Importantly, the adoption of Deep Learning based techniques was inspired by the precision to find solutions to numerous standard computer vision problems including object detection, image classification and segmentation. Visual odometry involves the pose estimation process that involves a robot and how they use a stream of images obtained from cameras that are attached to them [2]. The main aim of visual odometry is the estimations from camera pose. It is an approach that avoids contact with the robot for the purpose of ensuring that the mobile robots are effectively positioned. For this reason, the process is quite a challenging task that is related to mapping and simultaneous localization whose main aim is to generate the road map from a stream of visual data [3]. Estimates of motion from pixel differences and features between frames are made based on cameras that are strategically positioned. For mobile robots to achieve an actively controlled navigation, a real time 3D and reliable localization and reconstruction of functions is an essential prerequisite [4]. Mobile robots have to perform localization and mapping functions simultaneously and this poses a major challenge for them. The Simultaneous Localization and Mapping (SLAM) problem has attracted attention as various studies extensively evaluate it [5]. To solve the SLAM problem, visual odometry has been suggested especially because cameras provide high quality information at a low cost from the sensors that are conducive for human environments [6]. The major advances in computer vision also make possible quite a number of synergistic capabilities including terrain and scene classification, object detection and recognition. Notably, the visual odometry in mobile robot have enabled for more precise, intuitive and accurate detection. Although there has been significant progress in the last decade to bring improvements to passive mobile robots into controllable robots that are active, there are still notable challenges in the effort to achieve this. Particularly, a 3D map reconstruction that is fully dense to facilitate for exploration still remains an unsolved problem. It is only through a dense map that mobile robots can be able to more reliably do localization and ultimately leading to findings that are more accurate [7] [8]. According to Turan ( [9]), it is essential that adoptions of a comprehensive reconstruction on the suitable 3D method for mobile robots be adopted. 
This can be made possible through the building of a modular fashion including key frame selection, pre-processing, estimates on sparse then dense alignment based pose, shading based 3D and bundle fusion reconstruction [10]. There is also the challenge of the real time precise localization of the mobile robots that are actively controlled. The study by [11], which employed quantitative and qualitative in trajectory estimations sought to find solution to the challenge of precise localization for the endoscopic robot capsule. The data set was general and this was ensured through the fitting of 3 endoscopic cameras in different locations for the purpose of capturing the endoscopic videos [12]. Stomach videos were recorded for 15 minutes and they contained more than 10,000 frames. Through this, the ground truth was served for the 3D reconstruction module maps’ quantitative evaluations [13]. Its findings proposed that the direct SLAM be implemented on a map fusion based method that is non rigid for the mobile robots [14]. Through this method, high accuracy is likely to be achieved for extensive evaluations and conclusions [15]. The industry of mobile robots continues to face numerous challenges majorly because of enabling technology, including perception, artificial intelligence and power sources [16]. Evidently, motors, actuators and gears are essential to the robotic world today. Work is still in progress in the development of soft robotics, artificial muscles and strategies of assembly that are aimed at developing the autonomous robot’s generation in the coming future that are power efficient and multifunctional. There is also the aspect of robots lacing synchrony, calibration and symmetry which serves to increase the photometric error. This challenge maybe addressed by adopting the direct odometry method [17]. Direct sparse odometry has been recommended by various studies since it has been found to reduce the photometric error. This can be associated to the fact that it combines a probabilistic model with joint optimization of model parameters [9]. It has also been found to maintain high levels of consistency especially because it incorporates geometry parameters which also increase accuracy levels [18].", "title": "" }, { "docid": "b729bb8bc6a9b8dd655b77a7bfc68846", "text": "BACKGROUND\nWe describe our experiences with vaginal vault resection for vaginal recurrence of cervical cancer after hysterectomy and radiotherapy. After operative treatment, the rate of vaginal vault recurrence of uterine cervical cancer is reported to be about 5%. There is no consensus regarding the treatment for these cases.\n\n\nMETHODS\nBetween 2004 and 2012, eight patients with vaginal vault recurrence underwent removal of the vaginal wall via laparotomy after hysterectomy and radiotherapy.\n\n\nRESULTS\nThe median patient age was 45 years (range 35 to 70 years). The median operation time was 244.5 min (range 172 to 590 min), the median estimated blood loss was 362.5 mL (range 49 to 1,890 mL), and the median duration of hospitalization was 24.5 days (range 11 to 50 days). Two patients had intraoperative complications: a grade 1 bowel injury and a grade 1 bladder injury. The following postoperative complications were observed: one patient had vaginal vault bleeding, three patients developed vesicovaginal fistulae, and one patient had repeated ileus. Two patients needed clean intermittent catheterization. 
Local control was achieved in five of the eight cases.\n\n\nCONCLUSIONS\nVaginal vault resection is an effective treatment for vaginal recurrence of cervical cancer after hysterectomy and radiotherapy. However, complications of this procedure can be expected to reduce quality of life. Therefore, this operation should be selected with great care.", "title": "" }, { "docid": "cc06553e4d03bf8541597d01de4d5eae", "text": "Several technologies are used today to improve safety in transportation systems. The development of a system for drivability based on both V2V and V2I communication is considered an important task for the future. V2X communication will be a next step for the transportation safety in the nearest time. A lot of different structures, architectures and communication technologies for V2I based systems are under development. Recently a global paradigm shift known as the Internet-of-Things (IoT) appeared and its integration with V2I communication could increase the safety of future transportation systems. This paper brushes up on the state-of-the-art of systems based on V2X communications and proposes an approach for system architecture design of a safe intelligent driver assistant system using IoT communication. In particular, the paper presents the design process of the system architecture using IDEF modeling methodology and data flows investigations. The proposed approach shows the system design based on IoT architecture reference model.", "title": "" }, { "docid": "8d1797caf78004e6ba548ace7d5a1161", "text": "An automated irrigation system was developed to optimize water use for agricultural crops. The system has a distributed wireless network of soil-moisture and temperature sensors placed in the root zone of the plants. In addition, a gateway unit handles sensor information, triggers actuators, and transmits data to a web application. An algorithm was developed with threshold values of temperature and soil moisture that was programmed into a microcontroller-based gateway to control water quantity. The system was powered by photovoltaic panels and had a duplex communication link based on a cellular-Internet interface that allowed for data inspection and irrigation scheduling to be programmed through a web page. The automated system was tested in a sage crop field for 136 days and water savings of up to 90% compared with traditional irrigation practices of the agricultural zone were achieved. Three replicas of the automated system have been used successfully in other places for 18 months. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.", "title": "" }, { "docid": "3613dd18a4c930a28ed520192f7ac23f", "text": "OBJECTIVES\nIn this paper we present a contemporary understanding of \"nursing informatics\" and relate it to applications in three specific contexts, hospitals, community health, and home dwelling, to illustrate achievements that contribute to the overall schema of health informatics.\n\n\nMETHODS\nWe identified literature through database searches in MEDLINE, EMBASE, CINAHL, and the Cochrane Library. Database searching was complemented by one author search and hand searches in six relevant journals. The literature review helped in conceptual clarification and elaborate on use that are supported by applications in different settings.\n\n\nRESULTS\nConceptual clarification of nursing data, information and knowledge has been expanded to include wisdom. 
Information systems and support for nursing practice benefits from conceptual clarification of nursing data, information, knowledge, and wisdom. We introduce three examples of information systems and point out core issues for information integration and practice development.\n\n\nCONCLUSIONS\nExploring interplays of data, information, knowledge, and wisdom, nursing informatics takes a practice turn, accommodating to processes of application design and deployment for purposeful use by nurses in different settings. Collaborative efforts will be key to further achievements that support task shifting, mobility, and ubiquitous health care.", "title": "" }, { "docid": "edb92440895801051e0bf63ade2cfbf8", "text": "Over the last three decades, dietary pattern analysis has come to the forefront of nutritional epidemiology, where the combined effects of total diet on health can be examined. Two analytical approaches are commonly used: a priori and a posteriori. Cluster analysis is a commonly used a posteriori approach, where dietary patterns are derived based on differences in mean dietary intake separating individuals into mutually exclusive, non-overlapping groups. This review examines the literature on dietary patterns derived by cluster analysis in adult population groups, focusing, in particular, on methodological considerations, reproducibility, validity and the effect of energy mis-reporting. There is a wealth of research suggesting that the human diet can be described in terms of a limited number of eating patterns in healthy population groups using cluster analysis, where studies have accounted for differences in sex, age, socio-economic status, geographical area and weight status. Furthermore, patterns have been used to explore relationships with health and chronic diseases and more recently with nutritional biomarkers, suggesting that these patterns are biologically meaningful. Overall, it is apparent that consistent trends emerge when using cluster analysis to derive dietary patterns; however, future studies should focus on the inconsistencies in methodology and the effect of energy mis-reporting.", "title": "" }, { "docid": "cafdc8bb8b86171026d5a852e7273486", "text": "A majority of the existing algorithms which mine graph datasets target complete, frequent sub-graph discovery. We describe the graph-based data mining system Subdue which focuses on the discovery of sub-graphs which are not only frequent but also compress the graph dataset, using a heuristic algorithm. The rationale behind the use of a compression-based methodology for frequent pattern discovery is to produce a fewer number of highly interesting patterns than to generate a large number of patterns from which interesting patterns need to be identified. We perform an experimental comparison of Subdue with the graph mining systems gSpan and FSG on the Chemical Toxicity and the Chemical Compounds datasets that are provided with gSpan. We present results on the performance on the Subdue system on the Mutagenesis and the KDD 2003 Citation Graph dataset. An analysis of the results indicates that Subdue can efficiently discover best-compressing frequent patterns which are fewer in number but can be of higher interest.", "title": "" }, { "docid": "5217c15c210a9475082329dff72811a2", "text": "This paper describes the USAAR-WLV taxonomy induction system that participated in the Taxonomy Extraction Evaluation task of SemEval-2015. 
We extend prior work on using vector space word embedding models for hypernym-hyponym extraction by simplifying the means to extract a projection matrix that transforms any hyponym to its hypernym. This is done by making use of function words, which are usually overlooked in vector space approaches to NLP. Our system performs best in the chemical domain and has achieved competitive results in the overall evaluations.", "title": "" }, { "docid": "fd3fb8803a618ff9f738b10dc484f6bc", "text": "Various studies on consumer purchasing behaviors have been presented and used in real problems. Data mining techniques are expected to be a more effective tool for analyzing consumer behaviors. However, the data mining method has disadvantages as well as advantages. Therefore, it is important to select appropriate techniques to mine databases. The objective of this paper is to improve conventional data mining analysis by applying several methods including fuzzy clustering, principal component analysis, and discriminate analysis. Many defects included in the conventional methods are improved in the paper. Moreover, in an experiment, association rule is employed to mine rules for trusted customers using sales data in a fiber industry", "title": "" }, { "docid": "2ab1f2d0ca28851dcc36721686a06fa2", "text": "A quarter-century ago visual neuroscientists had little information about the number and organization of retinotopic maps in human visual cortex. The advent of functional magnetic resonance imaging (MRI), a non-invasive, spatially-resolved technique for measuring brain activity, provided a wealth of data about human retinotopic maps. Just as there are differences amongst non-human primate maps, the human maps have their own unique properties. Many human maps can be measured reliably in individual subjects during experimental sessions lasting less than an hour. The efficiency of the measurements and the relatively large amplitude of functional MRI signals in visual cortex make it possible to develop quantitative models of functional responses within specific maps in individual subjects. During this last quarter-century, there has also been significant progress in measuring properties of the human brain at a range of length and time scales, including white matter pathways, macroscopic properties of gray and white matter, and cellular and molecular tissue properties. We hope the next 25years will see a great deal of work that aims to integrate these data by modeling the network of visual signals. We do not know what such theories will look like, but the characterization of human retinotopic maps from the last 25years is likely to be an important part of future ideas about visual computations.", "title": "" }, { "docid": "bf5d53e5465dd5e64385bf9204324059", "text": "A model of core losses, in which the hysteresis coefficients are variable with the frequency and induction (flux density) and the eddy-current and excess loss coefficients are variable only with the induction, is proposed. A procedure for identifying the model coefficients from multifrequency Epstein tests is described, and examples are provided for three typical grades of non-grain-oriented laminated steel suitable for electric motor manufacturing. Over a wide range of frequencies between 20-400 Hz and inductions from 0.05 to 2 T, the new model yielded much lower errors for the specific core losses than conventional models. 
The applicability of the model for electric machine analysis is also discussed, and examples from an interior permanent-magnet and an induction motor are included.", "title": "" }, { "docid": "8d0ce09e523001eb9d34d38108a2f603", "text": "In this paper we describe a point-based approach for animating elastoplastic materials. Our primary contribution is a simple method for computing the deformation gradient for each particle in the simulation. The deformation gradient is computed for each particle by finding the affine transformation that best approximates the motion of neighboring particles over a single timestep. These transformations are then composed to compute the total deformation gradient that describes the deformation around a particle over the course of the simulation. Given the deformation gradient we can apply arbitrary constitutive models and compute the resulting elastic forces. Our method has two primary advantages: we do not store or compare to an initial rest configuration and we work directly with the deformation gradient. The first advantage avoids poor numerical conditioning and the second naturally leads to a multiplicative model of deformation appropriate for finite deformations. We demonstrate our approach on a number of examples that exhibit a wide range of material behaviors.", "title": "" } ]
scidocsrr
bda5ec5787d1cbc373b70793574caf8a
Accurate wild animal recognition using PCA, LDA and LBPH
[ { "docid": "ff0e2291d873bef32de852f0b7b1fedb", "text": "Face recognition is one of the most successful applications of image analysis and understanding and has gained much attention in recent years. Various algorithms were proposed and research groups across the world reported different and often contradictory results when comparing them. The aim of this paper is to present an independent, comparative study of three most popular appearance-based face recognition projection methods (PCA, ICA, and LDA) in completely equal working conditions regarding preprocessing and algorithm implementation. We are motivated by the lack of direct and detailed independent comparisons of all possible algorithm implementations (e.g., all projection–metric combinations) in available literature. For consistency with other studies, FERET data set is used with its standard tests (gallery and probe sets). Our results show that no particular projection–metric combination is the best across all standard FERET tests and the choice of appropriate projection–metric combination can only be made for a specific task. Our results are compared to other available studies and some discrepancies are pointed out. As an additional contribution, we also introduce our new idea of hypothesis testing across all ranks when comparing performance results. VC 2006 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 252–260, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20059", "title": "" } ]
[ { "docid": "eccab5aa27f2350416dd2a04524dc658", "text": "Cryptochromes (CRY) are blue light photoreceptors that mediate various light-induced responses in plants and animals. Arabidopsis CRY (CRY1 and CRY2) functions through negatively regulating constitutive photomorphogenic (COP) 1, a repressor of photomorphogenesis. Water evaporation and photosynthesis are regulated by the stomatal pores in plants, which are closed in darkness but open in response to blue light. There is evidence only for the phototropin blue light receptors (PHOT1 and PHOT2) in mediating blue light regulation of stomatal opening. Here, we report a previously uncharacterized role for Arabidopsis CRY and COP1 in the regulation of stomatal opening. Stomata of the cry1 cry2 double mutant showed reduced blue light response, whereas those of the CRY1-overexpressing plants showed hypersensitive response to blue light. In addition, stomata of the phot1 phot2 double mutant responded to blue light, but those of the cry1 cry2 phot1 phot2 quadruple mutant hardly responded. Strikingly, stomata of the cop1 mutant were constitutively open in darkness and stomata of the cry1 cry2 cop1 and phot1 phot2 cop1 triple mutants were open as wide as those of the cop1 single mutant under blue light. These results indicate that CRY functions additively with PHOT in mediating blue light-induced stomatal opening and that COP1 is a repressor of stomatal opening and likely acts downstream of CRY and PHOT signaling pathways.", "title": "" }, { "docid": "d0e6cc1204c6957bf957e3bfcf4eee6e", "text": "INTRODUCTION\nVolatiles are easily accessible and widely used in a form of liquid petroleum gas. Death as a consequence of inhalation of volatiles can be either accidental or suicidal.\n\n\nCASE OUTLINE\nWe present a 62-year-old men who committed suicide by placing a plastic bag over the head and inhaling propane-butane mixture from a domestic gas tank. A rubber hose was attached to the tank valve and connected with the plastic bag. The body of the deceased showed signs of advanced postmortal changes. A suicidal note was found at the scene.\n\n\nCONCLUSION\nPropane-butane mixture, i.e. liquified petroleum gas leads to the depletion of oxygen in the air consequently causing hypoxia and anoxia, and therefore, unconsciousness and eventual death. The mechanisms of death in cases of volatile inhalation include cardiac arrhythmias, reflex cardiac vagal inhibition, and/or central nervous system depression. Similar mechanisms occur in cases of asphyxiation with a plastic bag. The reconstruction of the event in this case was based, not so much on autopsy findings (because of significant putrefaction changes), but on the police investigation and traces found at the scene.", "title": "" }, { "docid": "272d169020eda0983de52b88c9186501", "text": "Personas are user models that represent the user characteristics. In this paper we describe a Persona creation process which combines the quantitative method such as cluster analysis with qualitative method such as observation and interview to produce convincing and representative Personas. We illustrate the Personas creation process through a case study. 
We use cluster analysis to group the users by their similarities in goals and decision-making preference.", "title": "" }, { "docid": "fcea869f6aafdc0d341c87073422256f", "text": "Table A1 summarizes the various characteristics of the synthetic models used in the experiments, including the number of event types, the size of the state space, whether a challenging construct is contained (loops, duplicates, nonlocal choice, and concurrency), and the entropy of the process defined by the model (estimated based on a sample of size 10,000). The original models may contain either duplicate tasks (two conceptually different transitions with the same label) or invisible tasks (transitions that have no label, as their firing is not recorded in the event log). We transformed all invisible transitions to duplicates such that, when there was an invisible task i in the original model, we added duplicates for all transitions t that, when fired, enable the invisible transition. These duplicates emulate the combined firing of t and i. Since we do not distinguish between duplicates and invisible tasks, we combined this category.", "title": "" }, { "docid": "a40d3b98ab50a5cd924be09ab1f1cc40", "text": "Feeling comfortable reading and understanding financial statements is critical to the success of healthcare executives and physicians involved in management. Businesses use three primary financial statements: a balance sheet represents the equation, Assets = Liabilities + Equity; an income statement represents the equation, Revenues - Expenses = Net Income; a statement of cash flows reports all sources and uses of cash during the represented period. The balance sheet expresses financial indicators at one particular moment in time, whereas the income statement and the statement of cash flows show activity that occurred over a stretch of time. Additional information is disclosed in attached footnotes and other supplementary materials. There are two ways to prepare financial statements. Cash-basis accounting recognizes revenue when it is received and expenses when they are paid. Accrual-basis accounting recognizes revenue when it is earned and expenses when they are incurred. Although cash-basis is acceptable, periodically using the accrual method reveals important information about receivables and liabilities that could otherwise remain hidden. Become more engaged with your financial statements by spending time reading them, tracking key performance indicators, and asking accountants and financial advisors questions. This will help you better understand your business and build a successful future.", "title": "" }, { "docid": "26065fd6e8451c178cc19d2e71da4cc7", "text": "Urtica dioica or stinging nettle is traditionally used as an herbal medicine in Western Asia. The current study represents the investigation of antimicrobial activity of U. dioica from nine crude extracts that were prepared using different organic solvents, obtained from two extraction methods: the Soxhlet extractor (Method I), which included the use of four solvents with ethyl acetate and hexane, or the sequential partitions (Method II) with a five solvent system (butanol). The antibacterial and antifungal activities of crude extracts were tested against 28 bacteria, three yeast strains and seven fungal isolates by the disc diffusion and broth dilution methods. 
Amoxicillin was used as a positive control for bacterial strains, vancomycin for Streptococcus sp., miconazole nitrate (30 µg/mL) as a positive control for fungi and yeast, and pure methanol (v/v) as a negative control. The disc diffusion assay was used to determine the sensitivity of the samples, whilst the broth dilution method was used for the determination of the minimal inhibition concentration (MIC). The ethyl acetate and hexane extracts from extraction method I (EA I and HE I) exhibited the highest inhibition against some pathogenic bacteria such as Bacillus cereus, MRSA and Vibrio parahaemolyticus. A selection of extracts that showed some activity was further tested for the MIC and minimal bactericidal concentrations (MBC). MIC values of Bacillus subtilis and Methicillin-resistant Staphylococcus aureus (MRSA) using the butanol extract of extraction method II (BE II) were 8.33 and 16.33 mg/mL, respectively, while the MIC value using the ethyl acetate extract of extraction method II (EAE II) for Vibrio parahaemolyticus was 0.13 mg/mL. Our study showed that 47.06% of extracts inhibited Gram-negative bacteria (8 out of 17) and 63.63% of extracts inhibited Gram-positive bacteria (7 out of 11); overall, the frequency of antimicrobial activity was 13.45% (35 out of 342); within this, the frequency was 21.71% for extracts from extraction method I (33 out of 152 crude extracts) and 6.82% for extraction method II (13 out of 190 crude extracts). However, crude extracts from method I exhibited better antimicrobial activity against the Gram-positive bacteria than the Gram-negative bacteria. The positive results of screening medicinal plants for antibacterial activity constitute primary information for further phytochemical and pharmacological studies. Therefore, the extracts could be suitable as antimicrobial agents in the pharmaceutical and food industries.", "title": "" }, { "docid": "a3be253034ffcf61a25ad265fda1d4ff", "text": "With the development of automated logistics systems, flexible manufacturing systems (FMS) and unmanned automated factories, the application of automated guided vehicles (AGVs) has gradually become more important for improving production efficiency and logistics automation in enterprises. The development of AGV systems plays an important role in reducing labor cost, improving working conditions, and unifying information flow and logistics. Path planning has been a key issue in AGV control systems. In this paper, two key problems, shortest-time path planning and collision avoidance among multiple AGVs, are solved. An improved A-Star (A*) algorithm is proposed, which introduces turning factors, and edge removal based on the improved A* algorithm is adopted to solve the k shortest paths problem. Meanwhile, a dynamic path planning method based on the A* algorithm is presented that effectively searches for the shortest-time path and avoids collisions. Finally, simulation and experiment have been conducted to prove the feasibility of the algorithm.", "title": "" }, { "docid": "5a252b484da12138df0b06ee105690b6", "text": "Copyright © 2000, Ulf Nilsson and Jan Małuszyński. The book may be downloaded and printed for personal use only provided that the text (1) is not altered in any way, and (2) is accompanied by this copyright notice. The book may also be copied and distributed in paper-form for non-profit use only. No other form of distribution is allowed.
It is not allowed to distribute the book electronically.", "title": "" }, { "docid": "194db5da505acab27bbe14232b255d09", "text": "Latent Dirichlet allocation defines hidden topics to capture latent semantics in text documents. However, it assumes that all the documents are represented by the same topics, resulting in the “forced topic” problem. To solve this problem, we developed a group latent Dirichlet allocation (GLDA). GLDA uses two kinds of topics: local topics and global topics. The highly related local topics are organized into groups to describe the local semantics, whereas the global topics are shared by all the documents to describe the background semantics. GLDA uses variational inference algorithms for both offline and online data. We evaluated the proposed model for topic modeling and document clustering. Our experimental results indicated that GLDA can achieve a competitive performance when compared with state-of-the-art approaches.", "title": "" }, { "docid": "6ed28770b6709dcd24d4c255bdf48558", "text": "Symbolic execution is a promising testing and analysis methodology. It systematically explores a program's execution space and can generate test cases with high coverage. One significant practical challenge for symbolic execution is how to effectively explore the enormous number of program paths in real-world programs. Various heuristics have been proposed for guiding symbolic execution, but they are generally inefficient and ad-hoc. In this paper, we introduce a novel, unified strategy to guide symbolic execution to less explored parts of a program. Our key idea is to exploit a specific type of path spectra, namely the length-n subpath program spectra, to systematically approximate full path information for guiding path exploration. In particular, we use frequency distributions of explored length-n subpaths to prioritize \"less traveled\" parts of the program to improve test coverage and error detection. We have implemented our general strategy in KLEE, a state-of-the-art symbolic execution engine. Evaluation results on the GNU Coreutils programs show that (1) varying the length n captures program-specific information and exhibits different degrees of effectiveness, and (2) our general approach outperforms traditional strategies in both coverage and error detection.", "title": "" }, { "docid": "42db54e2a2edc93e4f69bb7efbe6d7a7", "text": "A simple planar antenna with high gain for long term evolution (LTE), global system for mobile communications (GSM), Bluetooth, worldwide interoperability for microwave access (WiMAX), digital cellular services (DCS), personal communication services (PCS), wireless local area network (WLAN), and global positioning system (GPS) mobile terminals is presented. The antenna has a wide -6 dB reflection coefficient bandwidth of 2.17 GHz (1.33 - 3.5 GHz). The antenna has a compact size of 44.9 mm × 35.5 mm × 0.87 mm, placed on one side of a substrate of dimension 35.5 mm × 75.5 mm. The antenna has a high gain of 3.85 dBi at 2.7 GHz, a maximum directivity of 5.04 dBi at 2.7 GHz, and a total efficiency that varies between 61% and 95% within the wide impedance bandwidth. In addition, the antenna gain and efficiency are almost constant within each individual operating band. Dimensions of the antenna are optimized for a wide impedance band.
Antenna radiation patterns are presented at 1.5, 2, 2.5, and 3 GHz.", "title": "" }, { "docid": "f1dfe8970376c9f71376c1139b2ffd5d", "text": "Two groups were contracted to experiment with coding of FACS (Ekman & Friesen, 1978) action units on a common database. One group is ours at CMU and the University of Pittsburgh, and the other is at UCSD. The database is from Frank and Ekman (1997) who video-recorded an interrogation in which subjects lied or told the truth about a mock crime. Subjects were ethnically diverse, action units occurred during speech, and out-of-plane motion and occlusion from head motion and glasses were common. The video data were originally collected to answer substantive questions in psychology, and represent a substantial challenge to automated AU recognition. This report describes the results of automated facial expression analysis by the CMU/Pittsburgh group. An interdisciplinary team of consultants, who have combined expertise in computer vision and in facial analysis, will compare the results of this report with those in a separate report submitted by the UCSD group.", "title": "" }, { "docid": "1d0241833add973cc7cf6117735b7a1a", "text": "This paper describes the conception and the construction of a low-cost spin coating machine incorporating inexpensive electronic components and open-source technology based on the Arduino platform. We present and discuss the details of the electrical, mechanical and control parts. This system coats thin films at micrometre-level thickness, and the ATM 328 microcontroller circuit controls and adjusts the spinning speed. We prepare thin films of various thicknesses with good uniformity using this spin coating system. The thickness and uniformity of deposited films were verified by determining electronic absorption spectra. We show that thin film thickness depends on the spin speed in the range of 2000–3500 rpm. We compare the results obtained on TiO2 layers deposited by our developed system to those grown using a standard commercial spin coating system.", "title": "" }, { "docid": "9e3263866208bbc6a9019b3c859d2a66", "text": "A residual network (or ResNet) is a standard deep neural net architecture, with state-of-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer’s output and the target output. Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem.
Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.", "title": "" }, { "docid": "813e41234aad749022a4d655af987ad6", "text": "Three- and four-element eyepiece designs are presented each with a different type of radial gradient-index distribution. Both quadratic and modified quadratic index profiles are shown to provide effective control of the field aberrations. In particular, the three-element design with a quadratic index profile demonstrates that the inhomogeneous power contribution can make significant contributions to the overall system performance, especially the astigmatism correction. Using gradient-index components has allowed for increased eye relief and field of view making these designs comparable with five- and six-element ones.", "title": "" }, { "docid": "7c254a96816b8ad1aa68a9a4927b3764", "text": "The purpose of this study is to explore cost and management accounting practices utilized by manufacturing companies operating in Istanbul, Turkey. The sample of the study consists of 61 companies, containing both small and medium-sized enterprises, and large companies. The data collection methodology of the study is questionnaire survey. The content of the questionnaire survey is based on several previous studies. The major findings of the study are as follows: the most widely used product costing method is job costing; the complexity in production poses as the highest ranking difficulty in product costing; the most widely used three overhead allocation bases are prime costs, units produced, and direct labor cost; pricing decisions is the most important area where costing information is used; overall mean of the ratio of overhead to total cost is 34.48 percent for all industries; and the most important three management accounting practices are budgeting, planning and control, and cost-volume-profit analysis. Furthermore, decreasing profitability, increasing costs and competition, and economic crises are the factors, which increase the perceived importance of cost accounting. The findings indicate that companies perceive traditional management accounting tools still important. However, new management accounting practices such as strategic planning, and transfer pricing are perceived less important than traditional ones. Therefore, companies need to improve themselves in this aspect.", "title": "" }, { "docid": "e938ad7500cecd5458e4f68e564e6bc4", "text": "In this article, an adaptive fuzzy sliding mode control (AFSMC) scheme is derived for robotic systems. In the AFSMC design, the sliding mode control (SMC) concept is combined with fuzzy control strategy to obtain a model-free fuzzy sliding mode control. The equivalent controller has been replaced by a fuzzy system and the uncertainties are estimated online. The approach of the AFSMC has the learning ability to generate the fuzzy control actions and adaptively compensates for the uncertainties. Despite the high nonlinearity and coupling effects, the control input of the proposed control algorithm has been decoupled leading to a simplified control mechanism for robotic systems. Simulations have been carried out on a two link planar robot. Results show the effectiveness of the proposed control system.", "title": "" }, { "docid": "bb77764661019656a621aebbe4c11db1", "text": "Variation in personality traits is 30-60% attributed to genetic influences. 
Attempts to unravel these genetic influences at the molecular level have, so far, been inconclusive. We performed the first genome-wide association study of Cloninger's temperament scales in a sample of 5117 individuals, in order to identify common genetic variants underlying variation in personality. Participants' scores on Harm Avoidance, Novelty Seeking, Reward Dependence, and Persistence were tested for association with 1,252,387 genetic markers. We also performed gene-based association tests and biological pathway analyses. No genetic variants that significantly contribute to personality variation were identified, while our sample provides over 90% power to detect variants that explain only 1% of the trait variance. This indicates that individual common genetic variants of this size or greater do not contribute to personality trait variation, which has important implications regarding the genetic architecture of personality and the evolutionary mechanisms by which heritable variation is maintained.", "title": "" }, { "docid": "ec9810e7def2ae57493996b460540af0", "text": "PURPOSE\nTo describe the results of a diabetic retinopathy screening program implemented in a primary care area.\n\n\nMETHODS\nA retrospective study was conducted using data automatically collected since the program began on 1 January 2007 until 31 December 2015.\n\n\nRESULTS\nThe number of screened diabetic patients has progressively increased, from 7,173 patients in 2007 to 42,339 diabetic patients in 2015. Furthermore, the ability of family doctors to correctly interpret retinographies has improved, with the proportion of retinal images classified as normal having increased from 55% in 2007 to 68% at the end of the study period. The proportion of non-evaluable retinographies decreased to 7% in 2015, having peaked at 15% during the program. This was partly due to a change in the screening program policy that allowed the use of tropicamide. The number of severe cases detected has declined, from 14% with severe non-proliferative and proliferativediabetic retinopathy in the initial phase of the program to 3% in 2015.\n\n\nCONCLUSIONS\nDiabetic eye disease screening by tele-ophthalmology has shown to be a valuable method in a growing population of diabetics. It leads to a regular medical examination of patients, helps ease the workload of specialised care services and favours the early detection of treatable cases. However, the results of implementing a program of this type are not immediate, achieving only modest results in the early years of the project that have improved over subsequent years.", "title": "" }, { "docid": "ae83e004c2b8f4f85f31b03ad2c596f6", "text": "Approximating a probability density in a tractable manner is a central task in Bayesian statistics. Variational Inference (VI) is a popular technique that achieves tractability by choosing a relatively simple variational approximation. Borrowing ideas from the classic boosting framework, recent approaches attempt to boost VI by replacing the selection of a single density with an iteratively constructed mixture of densities. In order to guarantee convergence, previous works impose stringent assumptions that require significant effort for practitioners. Specifically, they require a custom implementation of the greedy step (called the LMO) for every probabilistic model with respect to an unnatural variational family of truncated distributions. Our work fixes these issues with novel theoretical and algorithmic insights. 
On the theoretical side, we show that boosting VI satisfies a relaxed smoothness assumption which is sufficient for the convergence of the functional Frank-Wolfe (FW) algorithm. Furthermore, we rephrase the LMO problem and propose to maximize the Residual ELBO (RELBO) which replaces the standard ELBO optimization in VI. These theoretical enhancements allow for black box implementation of the boosting subroutine. Finally, we present a stopping criterion drawn from the duality gap in the classic FW analyses and exhaustive experiments to illustrate the usefulness of our theoretical and algorithmic contributions.", "title": "" } ]
scidocsrr
a9db391132eb8cc72a652f1fb1c77869
Oh that's what you meant!: reducing emoji misunderstanding
[ { "docid": "8bc418be099f14d677d3fdfbfa516248", "text": "The present study examines the influence of social context on the use of emoticons in Internet communication. Secondary school students (N = 158) responded to short internet chats. Social context (task-oriented vs. socio-emotional) and valence of the context (positive vs. negative) were manipulated in these chats. Participants were permitted to respond with text, emoticon or a combination of both. Results showed that participants used more emoticons in socio-emotional than in task-oriented social contexts. Furthermore, students used more positive emoticons in positive contexts and more negative emoticons in negative contexts. An interaction was found between valence and kind of context; in negative, task-oriented contexts subjects used the least emoticons. Results are related to research about the expression of emotions in face-to-face interaction. 2004 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "c2fe4bc93234330e1cce3924f65768ad", "text": "1 Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy 2 Department of Psychology, University of Sheffield, Sheffield, UK 3 Department of Clinical and Social Sciences in Psychology, University of Rochester, River, New York, USA 4 Department of Computer Science, University of Massachusetts Amherst, Massachusetts, USA *Correspondence: gianluca.baldassarre@istc.cnr.it", "title": "" }, { "docid": "0c77e3923dfae2b31824ce1285e6d5fd", "text": "1 ACKNOWLEDGEMENTS 2", "title": "" }, { "docid": "2961236d4e91c1c4837cae61ae52a45d", "text": "Searchable Symmetric Encryption (SSE) when deployed in the cloud allows one to query encrypted data without the risk of data leakage. Despite the widespread interest, existing surveys do not examine in detail how SSE’s underlying structures are designed and how these result in the many properties of a SSE scheme. This is the gap we seek to address, as well as presenting recent state-of-the-art advances on SSE. Specifically, we present a general framework and believe the discussions may lead to insights for potential new designs. We draw a few observations. First, most schemes use index table, where optimal index size and sublinear search can be achieved using an inverted index. Straightforward updating can only be achieved using direct index, but search time would be linear. A recent trend is the combinations of index table, and tree, deployed for efficient updating and storage. Secondly, mechanisms from related fields such as Oblivious RAM (ORAM) have been integrated to reduce leakages. However, using these mechanisms to minimise leakages in schemes with richer functionalities (e.g., ranked, range) is relatively unexplored. Thirdly, a new approach (e.g., multiple servers) is required to mitigate new and emerging attacks on leakage. Lastly, we observe that a proposed index may not be practically efficient when implemented, where I/O access must be taken into consideration.", "title": "" }, { "docid": "60a7d21510c3c3861d49c6294859c8b7", "text": "As mobile applications become more complex, specific development tools and frameworks as well as cost effective testing techniques and tools will be essential to assure the development of secure, high-quality mobile applications. This paper addresses the problem of automatic testing of mobile applications developed for the Google Android platform, and presents a technique for rapid crash testing and regression testing of Android applications. The technique is based on a crawler that automatically builds a model of the application GUI and obtains test cases that can be automatically executed. The technique is supported by a tool for both crawling the application and generating the test cases. In the paper we present an example of using the technique and the tool for testing a real small size Android application that preliminary shows the effectiveness and usability of the proposed testing approach.", "title": "" }, { "docid": "6b57c73406000ca0683b275c7e164c24", "text": "In this letter, a novel compact and broadband integrated transition between a laminated waveguide and an air-filled rectangular waveguide operating in Ka band is proposed. A three-pole filter equivalent circuit model is employed to interpret the working mechanism and to predict the performance of the transition. A back-to-back prototype of the proposed transition is designed and fabricated for proving the concept. 
Good agreement of the measured and simulated results is obtained. The measured result shows that the insertion loss of better than 0.26 dB from 34.8 to 37.8 GHz can be achieved.", "title": "" }, { "docid": "23cc56093faaf75eccbb68e8ac59949a", "text": "This paper presents a novel five-phase fault tolerant interior permanent magnet (IPM) motor drive with higher performance and reliability for electric vehicles applications. A new machine design along with efficient control strategy is developed for fault tolerant operation of electric drive without severely compromising the drive performance. Fault tolerance is achieved by employing a five phase fractional-slot concentrated windings configuration IPM motor drive, with each phase electrically, magnetically, thermally and physically independent of all the others. The proposed electric drive system presents higher torque density, negligible cogging torque, and about ±0.5% torque ripple. Power converter requirements are discussed and control strategies to minimize the impact of machine or converter fault are developed. Besides, all the requirement of a fault tolerant operation, including high phase inductance and negligible mutual coupling between phases are met. Analytical and finite element analysis and comparison case studies are presented.", "title": "" }, { "docid": "83473c46bb460f262046a15fb8e4053a", "text": "Niclosamide is an oral antihelminthic drug used to treat parasitic infections in millions of people worldwide. However recent studies have indicated that niclosamide may have broad clinical applications for the treatment of diseases other than those caused by parasites. These diseases and symptoms may include cancer, bacterial and viral infection, metabolic diseases such as Type II diabetes, NASH and NAFLD, artery constriction, endometriosis, neuropathic pain, rheumatoid arthritis, sclerodermatous graft-versus-host disease, and systemic sclerosis. Among the underlying mechanisms associated with the drug actions of niclosamide are uncoupling of oxidative phosphorylation, and modulation of Wnt/β-catenin, mTORC1, STAT3, NF-κB and Notch signaling pathways. Here we provide a brief overview of the biological activities of niclosamide, its potential clinical applications, and its challenges for use as a new therapy for systemic diseases.", "title": "" }, { "docid": "7401b3a6801b5c1349d961434ca69a3d", "text": "developed out of a need to solve a problem. The problem was posed, in the late 1960s, to the Optical Sciences Center (OSC) at the University of Arizona by the US Air Force. They wanted to improve the images of satellites taken from earth. The earth's atmosphere limits the image quality and exposure time of stars and satellites taken with telescopes over 5 inches in diameter at low altitudes and 10 to 12 inches in diameter at high altitudes. Dr. Aden Mienel was director of the OSC at that time. He came up with the idea of enhancing images of satellites by measuring the Optical Transfer Function (OTF) of the atmosphere and dividing the OTF of the image by the OTF of the atmosphere. The trick was to measure the OTF of the atmosphere at the same time the image was taken and to control the exposure time so as to capture a snapshot of the atmospheric aberrations rather than to average over time. The measured wavefront error in the atmosphere should not change more than ␭/10 over the exposure time. The exposure time for a low earth orbit satellite imaged from a mountaintop was determined to be about 1/60 second. 
Mienel was an astronomer and had used the standard Hartmann test (Fig 1), where large wooden or cardboard panels were placed over the aperture of a large telescope. The panels had an array of holes that would allow pencils of rays from stars to be traced through the telescope system. A photographic plate was placed inside and outside of focus, with a sufficient separation, so the pencil of rays would be separated from each other. Each hole in the panel would produce its own blurry image of the star. By taking two images a known distance apart and measuring the centroid of the images, one can trace the rays through the focal plane. Hartmann used these ray traces to calculate figures of merit for large telescopes. The data can also be used to make ray intercept curves (H'-tan U'). When Mienel could not cover the aperture while taking an image of the satellite, he came up with the idea of inserting a beam splitter in collimated space behind the eyepiece and placing a plate with holes in it at the image of the pupil. Each hole would pass a pencil of rays to a vidicon tube (this was before …", "title": "" }, { "docid": "ba085cc5591471b8a46e391edf2e78d4", "text": "Despite recent successes, pose estimators are still somewhat fragile, and they frequently rely on a precise knowledge of the location of the object. Unfortunately, articulated objects are also very difficult to detect. Knowledge about the articulated nature of these objects, however, can substantially contribute to the task of finding them in an image. It is somewhat surprising, that these two tasks are usually treated entirely separately. In this paper, we propose an Articulated Part-based Model (APM) for jointly detecting objects and estimating their poses. APM recursively represents an object as a collection of parts at multiple levels of detail, from coarse-to-fine, where parts at every level are connected to a coarser level through a parent-child relationship (Fig. 1(b)-Horizontal). Parts are further grouped into part-types (e.g., left-facing head, long stretching arm, etc) so as to model appearance variations (Fig. 1(b)-Vertical). By having the ability to share appearance models of part types and by decomposing complex poses into parent-child pairwise relationships, APM strikes a good balance between model complexity and model richness. Extensive quantitative and qualitative experiment results on public datasets show that APM outperforms state-of-the-art methods. We also show results on PASCAL 2007 - cats and dogs - two highly challenging articulated object categories.", "title": "" }, { "docid": "c898f6186ff15dff41dcb7b3376b975d", "text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. 
Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.", "title": "" }, { "docid": "0c1377f5e552a091df470a1bfa63c8b3", "text": "PURPOSE\nWe present a comprehensive account of clitoral anatomy, including its component structures, neurovascular supply, relationship to adjacent structures (the urethra, vagina and vestibular glands, and connective tissue supports), histology and immunohistochemistry. We related recent anatomical findings to the historical literature to determine when data on accurate anatomy became available.\n\n\nMATERIALS AND METHODS\nAn extensive review of the current and historical literature was done. The studies reviewed included dissection and microdissection, magnetic resonance imaging (MRI), 3-dimensional sectional anatomy reconstruction, histology and immunohistochemical studies.\n\n\nRESULTS\nThe clitoris is a multiplanar structure with a broad attachment to the pubic arch and via extensive supporting tissue to the mons pubis and labia. Centrally it is attached to the urethra and vagina. Its components include the erectile bodies (paired bulbs and paired corpora, which are continuous with the crura) and the glans clitoris. The glans is a midline, densely neural, non-erectile structure that is the only external manifestation of the clitoris. All other components are composed of erectile tissue with the composition of the bulbar erectile tissue differing from that of the corpora. The clitoral and perineal neurovascular bundles are large, paired terminations of the pudendal neurovascular bundles. The clitoral neurovascular bundles ascend along the ischiopubic rami to meet each other and pass along the superior surface of the clitoral body supplying the clitoris. The neural trunks pass largely intact into the glans. These nerves are at least 2 mm in diameter even in infancy. The cavernous or autonomic neural anatomy is microscopic and difficult to define consistently. MRI complements dissection studies and clarifies the anatomy. Clitoral pharmacology and histology appears to parallel those of penile tissue, although the clinical impact is vastly different.\n\n\nCONCLUSIONS\nTypical textbook descriptions of the clitoris lack detail and include inaccuracies. It is impossible to convey clitoral anatomy in a single diagram showing only 1 plane, as is typically provided in textbooks, which reveal it as a flat structure. MRI provides a multiplanar representation of clitoral anatomy in the live state, which is a major advantage, and complements dissection materials. The work of Kobelt in the early 19th century provides a most comprehensive and accurate description of clitoral anatomy, and modern study provides objective images and few novel findings. The bulbs appear to be part of the clitoris. They are spongy in character and in continuity with the other parts of the clitoris. The distal urethra and vagina are intimately related structures, although they are not erectile in character. They form a tissue cluster with the clitoris. This cluster appears to be the locus of female sexual function and orgasm.", "title": "" }, { "docid": "29649adbb39f182af1d84aab476ff8bf", "text": "Users of the online shopping site Amazon are encouraged to post reviews of the products that they purchase. Little attempt is made by Amazon to restrict or limit the content of these reviews. 
The number of reviews for different products varies, but the reviews provide accessible and plentiful data for relatively easy analysis for a range of applications. This paper seeks to apply and extend the current work in the field of natural language processing and sentiment analysis to data retrieved from Amazon. Naive Bayes and decision list classifiers are used to tag a given review as positive or negative. The number of stars a user gives a product is used as training data to perform supervised machine learning. A corpus containing 50,000 product reviews from 15 products serves as the dataset of this study. Top selling and reviewed books on the site are the primary focus of the experiments, but the features of them that aid in accurate classification are compared to those most useful in classification of other media products. The features, such as bag-of-words and bigrams, are compared to one another in their effectiveness in correctly tagging reviews. Errors in classification and general difficulties regarding the selection of features are analyzed and discussed.", "title": "" }, { "docid": "40985eaaab8dee09b4641539771bb02d", "text": "Sentence ranking is one of the most important research issues in text analysis. It can be used in text summarization and information retrieval. Graph-based methods are a common way of ranking and extracting sentences. In graph-based methods, sentences are nodes of the graph and edges are built based on sentence similarities or on sentence co-occurrence relationships. PageRank-style algorithms can be applied to get sentence ranks. In this paper, we focus on how to rank sentences in a single scientific paper. Scientific literature has more structural information than general text, and this structural information has not yet been fully explored in graph-based ranking models. We investigated several different methods that use is-part-of links on paragraphs and sections, similarity links, and co-occurrence links to construct a heterogeneous graph for ranking sentences. We conducted experiments on these methods to compare the results on sentence ranking. The results show that structural information can help identify more representative sentences.", "title": "" }, { "docid": "de81c39f2a87229710009776323b8a3b", "text": "Real-time bidding (RTB) is an important mechanism in online display advertising, where a proper bid for each page view plays an essential role in good marketing results. Budget constrained bidding is a typical scenario in RTB where the advertisers hope to maximize the total value of the winning impressions under a pre-set budget constraint. However, the optimal bidding strategy is hard to derive due to the complexity and volatility of the auction environment. To address these challenges, in this paper, we formulate budget constrained bidding as a Markov Decision Process and propose a model-free reinforcement learning framework to resolve the optimization problem. Our analysis shows that the immediate reward from the environment is misleading under a critical resource constraint. Therefore, we innovate a reward function design methodology for the reinforcement learning problems with constraints. Based on the new reward design, we employ a deep neural network to learn the appropriate reward so that the optimal policy can be learned effectively. Different from the prior model-based work, which suffers from the scalability problem, our framework is easy to deploy in large-scale industrial applications.
The experimental evaluations demonstrate the effectiveness of our framework on large-scale real datasets.", "title": "" }, { "docid": "9096faacedfeca72df5d2d37c1816a03", "text": "Large-scale, self-organizing wireless sensor and mesh network deployments are being driven by recent technological developments such as The Internet of Things (IoT), Smart Grids and Smart Environment applications. Efficient use of the limited energy resources of wireless sensor network (WSN) nodes is critically important to support these advances, and application of topology control methods will have a profound impact on energy efficiency and hence battery lifetime. In this survey, we focus on the energy efficiency issue and present a comprehensive study of topology control techniques for extending the lifetime of battery powered WSNs. First, we review the significant topology control algorithms to provide insights into how energy efficiency is achieved by design. Further, these algorithms are classified according to the energy conservation approach they adopt, and evaluated by the trade-offs they offer to aid designers in selecting a technique that best suits their applications. Since the concept of \"network lifetime\" is widely used for assessing the algorithms' performance, we highlight various definitions of the term and discuss their merits and drawbacks. Recently, there has been growing interest in algorithms for non-planar topologies such as deployments in underwater environments or multi-level buildings. For this reason, we also include a detailed discussion of topology control algorithms that work efficiently in three dimensions. Based on the outcomes of our review, we identify a number of open research issues for achieving energy efficiency through topology control.", "title": "" }, { "docid": "1d7d96d37584398359f9b85bc7741578", "text": "BACKGROUND\nTwo types of soft tissue filler that are in common use are those formulated primarily with calcium hydroxylapatite (CaHA) and those with cross-linked hyaluronic acid (cross-linked HA).\n\n\nOBJECTIVE\nTo provide physicians with a scientific rationale for determining which soft tissue fillers are most appropriate for volume replacement.\n\n\nMATERIALS\nSix cross-linked HA soft tissue fillers (Restylane and Perlane from Medicis, Scottsdale, AZ; Restylane SubQ from Q-Med, Uppsala, Sweden; and Juvéderm Ultra, Juvéderm Ultra Plus, and Juvéderm Voluma from Allergan, Pringy, France) and a soft tissue filler consisting of CaHA microspheres in a carrier gel containing carboxymethyl cellulose (Radiesse, BioForm Medical, Inc., San Mateo, CA). METHODS The viscosity and elasticity of each filler gel were quantified according to deformation oscillation measurements conducted using a Thermo Haake RS600 Rheometer (Newington, NH) using a plate and plate geometry with a 1.2-mm gap. All measurements were performed using a 35-mm titanium sensor at 30°C. Oscillation measurements were taken at 5 pascal tau (τ) over a frequency range of 0.1 to 10 Hz (interpolated at 0.7 Hz). Researchers chose the 0.7-Hz frequency because it elicited the most reproducible results and was considered physiologically relevant for stresses that are common to the skin. RESULTS The rheological measurements in this study support the concept that soft tissue fillers that are currently used can be divided into three groups. 
CONCLUSION Rheological evaluation enables the clinician to objectively classify soft tissue fillers, to select specific filler products based on scientific principles, and to reliably predict how these products will perform--lifting, supporting, and sculpting--after they are appropriately injected.", "title": "" }, { "docid": "e4a1200b7f8143b1322c8a66d625d842", "text": "This paper examines the spatial patterns of unemployment in Chicago between 1980 and 1990. We study unemployment clustering with respect to different social and economic distance metrics that reflect the structure of agents’ social networks. Specifically, we use physical distance, travel time, and differences in ethnic and occupational distribution between locations. Our goal is to determine whether our estimates of spatial dependence are consistent with models in which agents’ employment status is affected by information exchanged locally within their social networks. We present non-parametric estimates of correlation across Census tracts as a function of each distance metric as well as pairs of metrics, both for unemployment rate itself and after conditioning on a set of tract characteristics. Our results indicate that there is a strong positive and statistically significant degree of spatial dependence in the distribution of raw unemployment rates, for all our metrics. However, once we condition on a set of covariates, most of the spatial autocorrelation is eliminated, with the exception of physical and occupational distance. Racial and ethnic composition variables are the single most important factor in explaining the observed correlation patterns.", "title": "" }, { "docid": "43628e18a38d6cc9134fcf598eae6700", "text": "Purchase of dietary supplement products is increasing despite the lack of clinical evidence to support health needs for consumption. The purpose of this cross-sectional study is to examine the factors influencing consumer purchase intention of dietary supplement products in Penang based on the Theory of Planned Behaviour (TPB). A total of 367 consumers were recruited from chain pharmacies and hypermarkets in Penang. From statistical analysis, the role of attitude differs from the original TPB model; attitude played a new role as the mediator in this dietary supplement products context. Findings concluded that subjective norms, importance of price and health consciousness affected dietary supplement products purchase intention indirectly through attitude formation, with 71.5% of the variance explained. Besides, significant differences were observed between dietary supplement products users and non-users in all variables. Dietary supplement product users have a stronger intention to purchase dietary supplement products, a more positive attitude, stronger perceived social pressures to purchase, perceive more availability, place more importance on price and have a higher level of health consciousness compared to non-users. Therefore, in order to promote healthy living through natural ways, consumers’ attitude formation towards dietary supplement products should be the main focus.
Policy maker, healthcare providers, educators, researchers and dietary supplement industry must be responsible and continue to work diligently to provide consumers with accurate dietary supplement products and healthy living information.", "title": "" }, { "docid": "324c0fe0d57734b54dd03e468b7b4603", "text": "This paper studies the use of received signal strength indicators (RSSI) applied to fingerprinting method in a Bluetooth network for indoor positioning. A Bayesian fusion (BF) method is proposed to combine the statistical information from the RSSI measurements and the prior information from a motion model. Indoor field tests are carried out to verify the effectiveness of the method. Test results show that the proposed BF algorithm achieves a horizontal positioning accuracy of about 4.7 m on the average, which is about 6 and 7 % improvement when compared with Bayesian static estimation and a point Kalman filter method, respectively.", "title": "" }, { "docid": "4df5ae1f7eae0c366bd5bdb30af80ad2", "text": "Robots inevitably fail, often without the ability to recover autonomously. We demonstrate an approach for enabling a robot to recover from failures by communicating its need for specific help to a human partner using natural language. Our approach automatically detects failures, then generates targeted spoken-language requests for help such as “Please give me the white table leg that is on the black table.” Once the human partner has repaired the failure condition, the system resumes full autonomy. We present a novel inverse semantics algorithm for generating effective help requests. In contrast to forward semantic models that interpret natural language in terms of robot actions and perception, our inverse semantics algorithm generates requests by emulating the human’s ability to interpret a request using the Generalized Grounding Graph (G) framework. To assess the effectiveness of our approach, we present a corpusbased online evaluation, as well as an end-to-end user study, demonstrating that our approach increases the effectiveness of human interventions compared to static requests for help.", "title": "" } ]
scidocsrr
41081ba185e0d6f2bb18067fa4054324
Deep Learning for Aspect Based Sentiment Detection
[ { "docid": "c7ff47c187c3f8be2a083bf581295f9a", "text": "The recent tremendous success of unsupervised word embeddings in a multitude of applications raises the obvious question if similar methods could be derived to improve embeddings (i.e. semantic representations) of word sequences as well. We present a simple but efficient unsupervised objective to train distributed representations of sentences. Our method outperforms the state-of-the-art unsupervised models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings.", "title": "" } ]
[ { "docid": "e66f7a7e3fcb833edde92bba24cb7145", "text": "Essential oils are complex blends of a variety of volatile molecules such as terpenoids, phenol-derived aromatic components, and aliphatic components having a strong interest in pharmaceutical, sanitary, cosmetic, agricultural, and food industries. Since the middle ages, essential oils have been widely used for bactericidal, virucidal, fungicidal, antiparasitical, insecticidal, and other medicinal properties such as analgesic, sedative, anti-inflammatory, spasmolytic, and locally anaesthetic remedies. In this review their nanoencapsulation in drug delivery systems has been proposed for their capability of decreasing volatility, improving the stability, water solubility, and efficacy of essential oil-based formulations, by maintenance of therapeutic efficacy. Two categories of nanocarriers can be proposed: polymeric nanoparticulate formulations, extensively studied with significant improvement of the essential oil antimicrobial activity, and lipid carriers, including liposomes, solid lipid nanoparticles, nanostructured lipid particles, and nano- and microemulsions. Furthermore, molecular complexes such as cyclodextrin inclusion complexes also represent a valid strategy to increase water solubility and stability and bioavailability and decrease volatility of essential oils.", "title": "" }, { "docid": "5ae974ffec58910ea3087aefabf343f8", "text": "With the ever-increasing use of multimedia contents through electronic commerce and on-line services, the problems associated with the protection of intellectual property, management of large database and indexation of content are becoming more prominent. Watermarking has been considered as efficient means to these problems. Although watermarking is a powerful tool, there are some issues with the use of it, such as the modification of the content and its security. With respect to this, identifying content itself based on its own features rather than watermarking can be an alternative solution to these problems. The aim of fingerprinting is to provide fast and reliable methods for content identification. In this paper, we present a new approach for image fingerprinting using the Radon transform to make the fingerprint robust against affine transformations. Since it is quite easy with modern computers to apply affine transformations to audio, image and video, there is an obvious necessity for affine transformation resilient fingerprinting. Experimental results show that the proposed fingerprints are highly robust against most signal processing transformations. Besides robustness, we also address other issues such as pairwise independence, database search efficiency and key dependence of the proposed method. r 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c03b3070b22793dacee7ebc46d280fc8", "text": "In this paper, the vibration reduction problem is investigated for a flexible spacecraft during attitude maneuvering. A new control strategy is proposed, which integrates both the command input shaping and the sliding mode output feedback control (SMOFC) techniques. Specifically, the input shaper is designed for the reference model and implemented outside of the feedback loop in order to achieve the exact elimination of the residual vibration by modifying the existing command. 
The feedback controller, on the other hand, is designed based on the SMOFC such that the closed-loop system behaves like the reference model with input shaper, where the residual vibrations are eliminated in the presence of parametric uncertainties and external disturbances. An attractive feature of this SMOFC algorithm is that the parametric uncertainties or external disturbances of the system do not need to satisfy the so-called matching conditions or invariance conditions provided that certain bounds are known. In addition, a smoothed hyperbolic tangent function is introduced to eliminate the chattering phenomenon. Compared with the conventional methods, the proposed scheme guarantees not only the stability of the closed-loop system, but also good performance and robustness. Simulation results for the spacecraft model show that precise attitude control and vibration suppression are successfully achieved.", "title": "" }, { "docid": "4bcfc77dabf9c0545fb28059a6df40c8", "text": "Over the past decade, machine learning techniques have made substantial advances in many domains. In health care, global interest in the potential of machine learning has increased; for example, a deep learning algorithm has shown high accuracy in detecting diabetic retinopathy.1 There have been suggestions that machine learning will drive changes in health care within a few years, specifically in medical disciplines that require more accurate prognostic models (eg, oncology) and those based on pattern recognition (eg, radiology and pathology). However, comparative studies on the effectiveness of machine learning–based decision support systems (ML-DSS) in medicine are lacking, especially regarding the effects on health outcomes. Moreover, the introduction of new technologies in health care has not always been straightforward or without unintended and adverse effects.2 In this Viewpoint we consider the potential unintended consequences that may result from the application of ML-DSS in clinical practice.", "title": "" }, { "docid": "387c2b51fcac3c4f822ae337cf2d3f8d", "text": "This paper directly follows and extends, where a novel method for measurement of extreme impedances is described theoretically. In this paper experiments proving that the method can significantly improve stability of a measurement system are described. Using an Agilent PNA E8364A vector network analyzer (VNA), the method is able to measure reflection coefficient with stability improved 36-times in magnitude and 354-times in phase compared to the classical method of reflection coefficient measurement. Further, validity of the error model and related equations stated in are verified by real measurement of SMD resistors (size 0603) in a microwave test fixture. Values of the measured SMD resistors range from 12 kΩ up to 330 kΩ. A novel calibration technique using three different resistors as calibration standards is used. The measured values of impedances reasonably agree with assumed values.", "title": "" }, { "docid": "355349959440aefc51e3c4ad98cb4697", "text": "Article history: Received 23 May 2008 Accepted 10 June 2009 Available online 18 August 2009", "title": "" }, { "docid": "f5d58660137891111a009bc841950ad2", "text": "Lateral brow ptosis is a common aging phenomenon, contributing to lateral upper eyelid hooding, in addition to dermatochalasis. Lateral brow lift complements upper blepharoplasty in achieving a youthful periorbital appearance.
In this study, the author reports his experience in utilizing a temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia. A retrospective analysis of all patients undergoing the proposed technique by one surgeon from 2009 to 2016 was conducted. Additional procedures were recorded. Preoperative and postoperative photographs at the longest follow-up visit were used for analysis. Operation was performed under local anesthesia. The surgical technique included a temporal (pretrichial) incision with subcutaneous dissection toward the lateral brow, with superolateral lift and closure. Total of 45 patients (44 females, 1 male; mean age: 58 years) underwent the temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia in office setting. The procedure was unilateral in 4 cases. Additional procedures included upper blepharoplasty (38), ptosis surgery (16), and lower blepharoplasty (24). Average follow-up time was 1 year (range, 6 months to 5 years). All patients were satisfied with the eyebrow contour and scar appearance. One patient required additional brow lift on one side for asymmetry. There were no cases of frontal nerve paralysis. In conclusion, the temporal (pretrichial) subcutaneous approach is an effective, safe technique for lateral brow lift/contouring, which can be performed under local anesthesia. It is ideal for women. Additional advantages include ease of operation, cost, and shortening the hairline (if necessary).", "title": "" }, { "docid": "1fba9ed825604e8afde8459a3d3dc0c0", "text": "Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a \"learning via translation\" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of an Siamese network and a CycleGAN. Through domain adaptation experiment, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.", "title": "" }, { "docid": "3caa8fc1ea07fcf8442705c3b0f775c5", "text": "Recent research in the field of computational social science have shown how data resulting from the widespread adoption and use of social media channels such as twitter can be used to predict outcomes such as movie revenues, election winners, localized moods, and epidemic outbreaks. 
Underlying assumptions for this research stream on predictive analytics are that social media actions such as tweeting, liking, commenting and rating are proxies for user/consumer's attention to a particular object/product and that the shared digital artefact that is persistent can create social influence. In this paper, we demonstrate how social media data from twitter can be used to predict the sales of iPhones. Based on a conceptual model of social data consisting of social graph (actors, actions, activities, and artefacts) and social text (topics, keywords, pronouns, and sentiments), we develop and evaluate a linear regression model that transforms iPhone tweets into a prediction of the quarterly iPhone sales with an average error close to the established prediction models from investment banks. This strong correlation between iPhone tweets and iPhone sales becomes marginally stronger after incorporating sentiments of tweets. We discuss the findings and conclude with implications for predictive analytics with big social data.", "title": "" }, { "docid": "5aa593b6535869cacde9b0956cccc09a", "text": "This review reports on the effects of hypoxia on human skeletal muscle tissue. It was hypothesized in early reports that chronic hypoxia, as the main physiological stress during exposure to altitude, per se might positively affect muscle oxidative capacity and capillarity. However, it is now established that sustained exposure to severe hypoxia has detrimental effects on muscle structure. Short-term effects on skeletal muscle structure can readily be observed after 2 months of acute exposure of lowlanders to severe hypoxia, e.g. during typical mountaineering expeditions to the Himalayas. The full range of phenotypic malleability of muscle tissue is demonstrated in people living permanently at high altitude (e.g. at La Paz, 3600-4000 m). In addition, there is some evidence for genetic adaptations to hypoxia in high-altitude populations such as Tibetans and Quechuas, who have been exposed to altitudes in excess of 3500 m for thousands of generations. The hallmark of muscle adaptation to hypoxia in all these cases is a decrease in muscle oxidative capacity concomitant with a decrease in aerobic work capacity. It is thought that local tissue hypoxia is an important adaptive stress for muscle tissue in exercise training, so these results seem contra-intuitive. Studies have therefore been conducted in which subjects were exposed to hypoxia only during exercise sessions. In this situation, the potentially negative effects of permanent hypoxic exposure and other confounding variables related to exposure to high altitude could be avoided. Training in hypoxia results, at the molecular level, in an upregulation of the regulatory subunit of hypoxia-inducible factor-1 (HIF-1). Possibly as a consequence of this upregulation of HIF-1, the levels mRNAs for myoglobin, for vascular endothelial growth factor and for glycolytic enzymes, such as phosphofructokinase, together with mitochondrial and capillary densities, increased in a hypoxia-dependent manner. Functional analyses revealed positive effects on V(O(2)max) (when measured at altitude) on maximal power output and on lean body mass. 
In addition to the positive effects of hypoxia training on athletic performance, there is some recent indication that hypoxia training has a positive effect on the risk factors for cardiovascular disease.", "title": "" }, { "docid": "74227709f4832c3978a21abb9449203b", "text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.", "title": "" }, { "docid": "ae534b0d19b95dcee87f06ed279fc716", "text": "In this paper, comparative study of p type and n type solar cells are described using two popular solar cell analyzing software AFORS HET and PC1D. We use SiNx layer as Antireflection Coating and a passivated layer Al2O3 .The variation of reflection, absorption, I-V characteristics, and internal and external quantum efficiency have been done by changing the thickness of passivated layer and ARC layer, and front and back surface recombination velocities. The same analysis is taken by imposing surface charge at front of n-type solar Cell and we get 20.13%-20.15% conversion efficiency.", "title": "" }, { "docid": "1f7d0ccae4e9f0078eabb9d75d1a8984", "text": "A social network is composed by communities of individuals or organizations that are connected by a common interest. Online social networking sites like Twitter, Facebook and Orkut are among the most visited sites in the Internet. Presently, there is a great interest in trying to understand the complexities of this type of network from both theoretical and applied point of view. The understanding of these social network graphs is important to improve the current social network systems, and also to develop new applications. Here, we propose a friend recommendation system for social network based on the topology of the network graphs. The topology of network that connects a user to his friends is examined and a local social network called Oro-Aro is used in the experiments. We developed an algorithm that analyses the sub-graph composed by a user and all the others connected people separately by three degree of separation. However, only users separated by two degree of separation are candidates to be suggested as a friend. The algorithm uses the patterns defined by their connections to find those users who have similar behavior as the root user. 
The recommendation mechanism was developed based on the characterization and analyses of the network formed by the user's friends and friends-of-friends (FOF).", "title": "" }, { "docid": "030f53c532e6989b12c8c2199d6bd5ac", "text": "Instagram is a popular social networking application that allows users to express themselves through the uploaded content and the different filters they can apply. In this study we look at personality prediction from Instagram picture features. We explore two different features that can be extracted from pictures: 1) visual features (e.g., hue, valence, saturation), and 2) content features (i.e., the content of the pictures). To collect data, we conducted an online survey where we asked participants to fill in a personality questionnaire and grant us access to their Instagram account through the Instagram API. We gathered 54,962 pictures of 193 Instagram users. With our results we show that visual and content features can be used to predict personality from and perform in general equally well. Combining the two however does not result in an increased predictive power. Seemingly, they are not adding more value than they already consist of independently.", "title": "" }, { "docid": "ab156ab101063353a64bbcd51e47b88f", "text": "Spontaneous lens absorption (SLA) is a rare complication of hypermature cataract. However, this condition has been reported in several cases of hypermature cataracts that were caused by trauma, senility, uveitic disorders such as Fuchs’ uveitis syndrome (FUS), and infectious disorders including leptospirosis and rubella. We report a case of spontaneous absorption of a hypermature cataract secondary to FUS. To our knowledge, this is the first report of SLA that was followed by dislocation of the capsular remnants into the vitreous and resulted in a misdiagnosis as crystalline lens luxation.", "title": "" }, { "docid": "1eea111c3efcc67fcc1bb6f358622475", "text": "Methyl Cellosolve (the monomethyl ether of ethylene glycol) has been widely used as the organic solvent in ninhydrin reagents for amino acid analysis; it has, however, properties that are disadvantageous in a reagent for everyday employment. The solvent is toxic and it is difficult to keep the ether peroxide-free. A continuing effort to arrive at a chemically preferable and relatively nontoxic substitute for methyl Cellosolve has led to experiments with dimethyl s&oxide, which proves to be a better solvent for the reduced form of ninhydrin (hydrindantin) than is methyl Cellosolve. Dimethyl sulfoxide can replace the latter, volume for volume, in a ninhydrin reagent mixture that gives equal performance and has improved stability. The result is a ninhydrin-hydrindantin solution in 75% dimethyl sulfoxide25 % 4 M lithium acetate buffer at pH 5.2. This type of mixture, with appropriate hydrindantin concentrations, is recommended to replace methyl Cellosolve-containing reagents in the quantitative determination of amino acids by automatic analyzers and by the manual ninhydrin method.", "title": "" }, { "docid": "4fb62f06132119cb396e7f21a47d8682", "text": "It has long been an important issue in various disciplines to examine massive multidimensional data superimposed by a high level of noises and interferences by extracting the embedded multi-way factors. 
With the quick increases of data scales and dimensions in the big data era, research challenges arise in order to (1) reflect the dynamics of large tensors while introducing no significant distortions in the factorization procedure and (2) handle influences of the noises in sophisticated applications. A hierarchical parallel processing framework over a GPU cluster, namely H-PARAFAC, has been developed to enable scalable factorization of large tensors upon a “divide-and-conquer” theory for Parallel Factor Analysis (PARAFAC). The H-PARAFAC framework incorporates a coarse-grained model for coordinating the processing of sub-tensors and a fine-grained parallel model for computing each sub-tensor and fusing sub-factors. Experimental results indicate that (1) the proposed method breaks the limitation on the scale of multidimensional data to be factorized and dramatically outperforms the traditional counterparts in terms of both scalability and efficiency, e.g., the runtime increases in the order of $n^2$ when the data volume increases in the order of $n^3$, (2) H-PARAFAC has potentials in refraining the influences of significant noises, and (3) H-PARAFAC is far superior to the conventional window-based counterparts in preserving the features of multiple modes of large tensors.", "title": "" }, { "docid": "198b084248ea03fb1398df036db800bf", "text": "Assistive technology (AT) is defined in this paper as ‘any device or system that allows an individual to perform a task that they would otherwise be unable to do, or increases the ease and safety with which the task can be performed’ (Cowan and Turner-Smith 1999). Its importance in contributing to older people’s independence and autonomy is increasingly recognised, but there has been little research into the viability of extensive installations of AT. This paper focuses on the acceptability of AT to older people, and reports one component of a multidisciplinary research project that examined the feasibility, acceptability, costs and outcomes of introducing AT into their homes. Sixty-seven people aged 70 or more years were interviewed in-depth during 2001 to find out about their use and experience of a wide range of assistive technologies. The findings suggest a complex model of acceptability, in which a ‘felt need’ for assistance combines with ‘product quality’. The paper concludes by considering the tensions that may arise in the delivery of acceptable assistive technology.", "title": "" }, { "docid": "350ad8891cfed8e01aab3b70be20377b", "text": "The university course timetabling problem is a combinatorial optimisation problem in which a set of events has to be scheduled in time slots and located in suitable rooms. The design of course timetables for academic institutions is a very difficult task because it is an NP-hard problem. This paper proposes a genetic algorithm with a guided search strategy and a local search technique for the university course timetabling problem. The guided search strategy is used to create offspring into the population based on a data structure that stores information extracted from previous good individuals. The local search technique is used to improve the quality of individuals. 
The proposed genetic algorithm is tested on a set of benchmark problems and compared with state-of-the-art methods from the literature. The experimental results show that it is able to produce promising results for the university course timetabling problem.", "title": "" } ]
scidocsrr
1062f411157d09730649fb8f1ce256b6
Strategic IT alignment: twenty-five years on
[ { "docid": "645f320514b0fa5a8b122c4635bc3df6", "text": "A critical decision problem for top management, and the focus of this study, is whether the CEO (chief executive officer) and CIO (chief information officer) should commit their time to formal planning with the expectation of producing an information technology (IT)-based competitive advantage. Using the perspective of the resource-based view, a model is presented that examines how strategic IT alignment can produce enhanced organizational strategies that yield competitive advantage. One hundred sixty-one CIOs provided data using a postal survey. Results supported seven of the eight hypotheses. They showed that information intensity is an important antecedent to strategic IT alignment, that strategic IT alignment is best explained by multiple constructs which operationalize both process and content measures, and that alignment between the IT plan and the business plan is significantly related to the use of IT for competitive advantage. Study results raise questions about the effect of CEO participation, which appears to be the weak link in the process, and also about the perception of the CIO on the importance of CEO involvement. The paper contributes to our understanding of how knowledge sharing in the alignment process contributes to the creation of superior organizational strategies, provides a framework of the alignment-performance relationship, and furnishes several new constructs. Subject Areas: Competitive Advantage, Information Systems Planning, Knowledge Sharing, Resource-Based View, Strategic Planning, and Structural Equation Modeling.", "title": "" } ]
[ { "docid": "110742230132649f178d2fa99c8ffade", "text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.", "title": "" }, { "docid": "660a14e0b194621898d0492b6db3ea09", "text": "Machine vision-based PCB defect inspection system is designed to meet high speed and high precision requirement in PCB manufacture industry field, which is the combination of software and hardware. This paper firstly introduced the whole system structure and the principle of vision detection, while described the relevant key technologies used during the PCB defect inspection, finally implemented one set of test system with the key technologies mentioned. The experimental results show that the defect of PCB can be effectively inspected, located and recognized with the key technologies.", "title": "" }, { "docid": "dc98ddb6033ca1066f9b0ba5347a3d0c", "text": "Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. This work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.", "title": "" }, { "docid": "0459d9b635da3a6defe768427ef20834", "text": "Matrix factorization (MF) is used by many popular algorithms such as collaborative filtering. 
GPU with massive cores and high memory bandwidth sheds light on accelerating MF much further when appropriately exploiting its architectural characteristics.\n This paper presents cuMF, a CUDA-based matrix factorization library that optimizes alternate least square (ALS) method to solve very large-scale MF. CuMF uses a set of techniques to maximize the performance on single and multiple GPUs. These techniques include smart access of sparse data leveraging GPU memory hierarchy, using data parallelism in conjunction with model parallelism, minimizing the communication overhead among GPUs, and a novel topology-aware parallel reduction scheme.\n With only a single machine with four Nvidia GPU cards, cuMF can be 6-10 times as fast, and 33-100 times as cost-efficient, compared with the state-of-art distributed CPU solutions. Moreover, cuMF can solve the largest matrix factorization problem ever reported in current literature, with impressively good performance.", "title": "" }, { "docid": "f7d023abf0f651177497ae38d8494efc", "text": "Developing Question Answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including, Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. In this paper we realize a formal model for a lightweight semantic–based open domain yes/no Arabic question answering system based on paragraph retrieval (with variable length). We propose a constrained semantic representation. Using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms). This frequently improves the precision of the system. Employing the passage retrieval system achieves a better precision by retrieving more paragraphs that contain relevant answers to the question; It significantly reduces the amount of text to be processed by the system.", "title": "" }, { "docid": "cc61cf5de5445258a1dbb9a052821add", "text": "In healthcare systems, there is huge medical data collected from many medical tests which conducted in many domains. Much research has been done to generate knowledge from medical data by using data mining techniques. However, there still needs to extract hidden information in the medical data, which can help in detecting diseases in the early stage or even before happening. In this study, we apply three data mining classifiers; Decision Tree, Rule Induction, and Naïve Bayes, on a test blood dataset which has been collected from Europe Gaza Hospital, Gaza Strip. The classifiers utilize the CBC characteristics to predict information about possible blood diseases in early stage, which may enhance the curing ability. Three experiments are conducted on the test blood dataset, which contains three types of blood diseases; Hematology Adult, Hematology Children and Tumor. The results show that Naïve Bayes classifier has the ability to predict the Tumor of blood disease better than the other two classifiers with accuracy of 56%, Rule induction classifier gives better result in predicting Hematology (Adult, Children) with accuracy of (57%–67%) respectively, while Decision Tree has the Lowest accuracy rate for detecting the three types of diseases in our dataset.", "title": "" }, { "docid": "8ed122ede076474bdad5c8fa2c8fd290", "text": "Faced with changing markets and tougher competition, more and more companies realize that to compete effectively they must transform how they function. 
But while senior managers understand the necessity of change, they often misunderstand what it takes to bring it about. They assume that corporate renewal is the product of company-wide change programs and that in order to transform employee behavior, they must alter a company's formal structure and systems. Both these assumptions are wrong, say these authors. Using examples drawn from their four-year study of organizational change at six large corporations, they argue that change programs are, in fact, the greatest obstacle to successful revitalization and that formal structures and systems are the last thing a company should change, not the first. The most successful change efforts begin at the periphery of a corporation, in a single plant or division. Such efforts are led by general managers, not the CEO or corporate staff people. And these general managers concentrate not on changing formal structures and systems but on creating ad hoc organizational arrangements to solve concrete business problems. This focuses energy for change on the work itself, not on abstractions such as \"participation\" or \"culture.\" Once general managers understand the importance of this grass-roots approach to change, they don't have to wait for senior management to start a process of corporate renewal. The authors describe a six-step change process they call the \"critical path.\"", "title": "" }, { "docid": "ef52c7d4c56ff47c8e18b42e0a757655", "text": "Microprocessors and memory systems suffer from a growing gap in performance. We introduce Active Pages, a computation model which addresses this gap by shifting data-intensive computations to the memory system. An Active Page consists of a page of data and a set of associated functions which can operate upon that data. We describe an implementation of Active Pages on RADram (Reconfigurable Architecture DRAM), a memory system based upon the integration of DRAM and reconfigurable logic. Results from the SimpleScalar simulator [BA97] demonstrate up to 1000X speedups on several applications using the RADram system versus conventional memory systems. We also explore the sensitivity of our results to implementations in other memory technologies.", "title": "" }, { "docid": "0687cc3d9df74b2ff1dd94d55b773493", "text": "What should I wear? We present Magic Mirror, a virtual fashion consultant, which can parse, appreciate and recommend the wearing. Magic Mirror is designed with a large display and Kinect to simulate the real mirror and interact with users in augmented reality. Internally, Magic Mirror is a practical appreciation system for automatic aesthetics-oriented clothing analysis. Specifically, we focus on the clothing collocation rather than the single one, the style (aesthetic words) rather than the visual features. We bridge the gap between the visual features and aesthetic words of clothing collocation to enable the computer to learn appreciating the clothing collocation. Finally, both object and subject evaluations verify the effectiveness of the proposed algorithm and Magic Mirror system.", "title": "" }, { "docid": "4d400d084e9eb14fe44b9a9b26a0e739", "text": "Various image editing tools make our pictures more attractive, and at the same time, evoke different emotional responses. With powerful and easy-to-use imaging applications, capturing, editing and then sharing pictures have become daily life for many. This paper investigates the influence of several image manipulations on evoked emotions for different types of images. 
To do so, various types of images clustered in different categories, were collected from Instagram and subjective evaluations were conducted via crowdsourcing to gather the emotional responses on different manipulations as perceived by subjects. Evaluation results show that certain image manipulations can induce different evoked emotions on transformed pictures when compared to the original ones. However, such changes in image emotions due to manipulation are highly content dependent. Then, we conducted a machine learning based experiment, in attempt to predict the emotions of a manipulated image given its original version and the desired manipulation method. Experimental results present a promising performance of such a prediction model, which could pave the road to automatic selection or recommendation of image editing tools that can efficiently transform or emphasize desired emotions in pictures. Introduction Thanks to wide spread popularity of smart mobile devices with high-resolution cameras, as well as user-friendly imaging and social networking applications, taking pictures, then editing and sharing, have become part of everyday life for many. Photo sharing has been used as a way to share not only stories but also current moods with friends, family and public at large. Modern photo sharing applications equipped with advanced and easy-touse image editing tools, such as Instagram, provide consumers with very convenient solutions to make their pictures more attractive, and more importantly, to arouse stronger emotional resonances. Different types of image content generate different emotions. Using different photographic techniques, visual filters or editing tools, pictures of the same scene can also evoke different emotions. Motivated by these facts, we attempt to change an original picture’s evoked emotion and transform it to new emotions (stronger, weaker, or completely different) by image manipulation. To achieve this goal, we first need to understand the emotional responses evoked by different image manipulations when applied to pictures. This paper investigates the influence of image manipulations on evoked emotions, and tries to find the potential pattern between image manipulation and generated emotions. To do so, we conducted subjective experiments based on online crowdsourcing. Different types of images were collected from Instagram, and manipulated by a number of typical image editing tools. Crowdsourcing subjects were then exposed to each, and questioned regarding the emotions pictures induced on them. Using the crowdsourced data as groundtruth, we trained and evaluated a model based on machine learning for predicting evoked emotions, taking an original image and desired manipulation as input. The rest of the paper is structured as follows. The next section introduces the related works by other researchers, followed by a section describing the data collection and user study. Then we analyze and interpret emotional responses obtained from subjects, and report the experiments of emotion prediction upon image manipulation in the followed two sections. Finally, the last section concludes the paper and discusses future work. Prior Work Image aesthetic quality estimation, emotion recognition and classification have been largely studied in the field of computer vision [1, 2, 3, 4, 5]. Most previous works use image features for affective image classification and emotion prediction [2, 3, 6, 7, 5]. Such features include color, texture, composition, edge and semantic information. 
A few researchers have worked on transforming image emotions by editing images. In [8], Wang et al. associate color themes with emotion keywords depending on art theory and transform the color theme of an input image to the desired one. However, in their work, only a few cartoon-like images are used. Peng et al. [9] propose a framework to change an image’s emotion by randomly sampling from a set of possible target images, but only show a few examples. Jun et al. [10] show that changing brightness and contrast of an image can affect the pleasure and excitement felt by observers. However, only a limited variation of an input image can be produced by changing the two features. Peng et al. [11] change the color tone and texture related features of an image to transfer the evoked emotion distribution, with experiments conducted on only limited types of image content. Evaluating image’s evoked emotions after image manipulation is not a trivial task. Many well-established image manipulation and editing tools have been widely used in online photo sharing and social networks, as ways for users to enhance their image content either to draw better attention or to evoke stronger emotions. Popular image editing tools include image enhancement [12], grayscale conversion, vintage processing, cartoonizing [13], and more recently addition of stickers1 [14]. However, most image manipulation methods have been studied merely from the perspective of image processing and not so much on their emotional impact. 1https://www.facebook.com/help/1597631423793468 (a) Original (b) Cartoon (c) Emoji (d) Enhance (e) Halo (f) Gray (g) Grunge (h) Old paper Figure 1. Example image manipulated by different methods. Several affective image databases have been created in previous works, including artistic photos or abstract paintings used in [2], International Affective Picture System (IAPS) [15], The Geneva affective picture database (GAPED) [16] and Emotion6 [11]. In our research, we are more interested in the emotions of everyday photographs, especially those images that are widely shared by online users. Unfortunately, most existing affective image datasets contain either extremely emotional images, or images without much natural high-level semantic features like human face. All those types of images do not fit our requirements. Therefore we decided to collect our own dataset using Instagram, one of the most popular online photo sharing services. To measure emotions, different types of models have been designed by psychologists. One of the most popular is the valence-arousal (VA) model (proposed by Russell [17]), characterizing emotions in two dimensions, where valence measures attractiveness in a scale from positive to negative, while arousal indicates the degree of excitement or stimulation. In terms of categorization of emotions, Ekman’s six basic emotions (anger, disgust, fear, joy, sadness and surprise) [18] are widely known. In our work, we used both models similar to works in [16, 11]. Image Dataset and User Study This section describes in detail the image dataset creation and crowdsourcing experiment. Image Collection and Processing We collected images from Instagram. According to a previous study by Hu et al. [19], images shared within Instagram can be classified into the following eight basic categories in terms of their content: Friends, Food, Gadget, Captioned photo, Pet, Activity, Selfie and Fashion. 
Therefore, we collected image dataset by searching for the eight category keywords or their synonyms via Instagram #tag. This was mainly motivated in order to have a wider variety of image content. At the end 13 color images were selected manually for each category resulting in 104 images in total. All selected images have the same size of 640×640 pixels. For each image, seven different manipulations were applied to create different visual effects. We will refer to these manipulations as the following names: • Cartoon: Applies a cartoon effect to an image. • Emoji: Adds an Emoji on top-right corner of an image. • Enhance: Applies brightness/contrast/colorization enhancement on an image via LAB colorspace. • Halo: Applies a circular halo effect to an image. • Gray: Converts an image to gray scale. • Grunge: Applies a classic vintage effect with a grunge background to an image. • Old paper: Applies another heritage style vintage effect with an old paper background to an image. The reason of selecting the seven particular manipulations is that the changes of an image caused by these operations cover different aspects of image information, e.g. color, texture, composition, and higher-level image semantics. The emoji sticker “Tear of Joy” was selected as it has been in the top 10 most popular emojis on Emojipedia for all of 20152, and the emotion it expresses is not that obvious. The seven manipulations were implemented by using ImageMagick software3. An example image processed by the 7 different manipulations is illustrated in Figure 1. Summing up, a grand total of 832 (104× 8) images were generated, including the original versions of each image. The image dataset is publicly accessible at http://mmspg.epfl.ch/ emotion-image-datasets. User Study We used Microworkers4 platform to collect emotional responses from subjects. A questionnaire was designed where four emotion-related questions are asked for each image. The first two questions are about the valence and arousal ratings respectively, where a 9-point scale was used, same as [11, 15]. For valence, 1, 5, and 9 mean very negative, neutral, and very positive emotions respectively, in terms of attractiveness. For arousal, 1 and 9 mean emotions with very low and very high stimulating effects respectively. In the questionnaire, instead of directly asking subjects to provide VA scores, questions were rephrased to be similar as in [11]. The third question is about the emotion distribution of the image, based on Ekman’s six basic emotions [18]. Similar to [11], 7 emotion keywords (Ekman’s six basic emotions and “Neutral”) were used and subjects wer", "title": "" }, { "docid": "4b9c5c1851909ae31c4510f47cb61a60", "text": "Fraud has been very common in our society, and it affects private enterprises as well as public entities. However, in recent years, the development of new technologies has also provided criminals more sophisticated way to commit fraud and it therefore requires more advanced techniques to detect and prevent such events. The types of fraud in Telecommunication industry includes: Subscription Fraud, Clip on Fraud, Call Forwarding, Cloning Fraud, Roaming Fraud, and Calling Card. Thus, detection and prevention of these frauds is one of the main objectives of the telecommunication industry. In this research, we developed a model that detects fraud in Telecommunication sector in which a random rough subspace based neural network ensemble method was employed in the development of the model to detect subscription fraud in mobile telecoms. 
This study therefore presents the development of patterns that illustrate the customers’ subscription's behaviour focusing on the identification of non-payment events. This information interrelated with other features produces the rules that lead to the predictions as earlier as possible to prevent the revenue loss for the company by deployment of the appropriate actions.", "title": "" }, { "docid": "3029cfa2951d50880e439205a12a5629", "text": "Neuroevolutionary algorithms are successful methods for optimizing neural networks, especially for learning a neural policy (controller) in reinforcement learning tasks. Their significant advantage over gradient-based algorithms is the capability to search network topology as well as connection weights. However, state-of-the-art topology evolving methods are known to be inefficient compared to weight evolving methods with an appropriately hand-tuned topology. This paper introduces a novel efficient algorithm called CMA-TWEANN for evolving both topology and weights. Its high efficiency is achieved by introducing efficient topological mutation operators and integrating a state-of-the-art function optimization algorithm for weight optimization. Experiments on benchmark reinforcement learning tasks demonstrate that CMA-TWEANN solves tasks significantly faster than existing topology evolving methods. Furthermore, it outperforms weight evolving techniques even when they are equipped with a hand-tuned topology. Additional experiments reveal how and why CMA-TWEANN is the best performing weight evolving method.", "title": "" }, { "docid": "a1b3289280bab5a58ef3b23632e01f5b", "text": "Current devices have limited battery life, typically lasting less than one day. This can lead to situations where critical tasks, such as making an emergency phone call, are not possible. Other devices, supporting different functionality, may have sufficient battery life to enable this task. We present PowerShake; an exploration of power as a shareable commodity between mobile (and wearable) devices. PowerShake enables users to control the balance of power levels in their own devices (intra-personal transactions) and to trade power with others (inter-personal transactions) according to their ongoing usage requirements. This paper demonstrates Wireless Power Transfer (WPT) between mobile devices. PowerShake is: simple to perform on-the-go; supports ongoing/continuous tasks (transferring at ~3.1W); fits in a small form factor; and is compliant with electromagnetic safety guidelines while providing charging efficiency similar to other standards (48.2% vs. 51.2% in Qi). Based on our proposed technical implementation, we run a series of workshops to derive candidate designs for PowerShake enabled devices and interactions, and to bring to light the social implications of power as a tradable asset.", "title": "" }, { "docid": "c10a83c838f59adeb50608d5b96c0fbc", "text": "Robots are typically equipped with multiple complementary sensors such as cameras and laser range finders. Camera generally provides dense 2D information while range sensors give sparse and accurate depth information in the form of a set of 3D points. In order to represent the different data sources in a common coordinate system, extrinsic calibration is needed. This paper presents a pipeline for extrinsic calibration a zed setero camera with Velodyne LiDAR puck using a novel self-made 3D marker whose edges can be robustly detected in the image and 3d point cloud. 
Our approach first estimate the large sensor displacement using just a single frame. then we optimize the coarse results by finding the best align of edges in order to obtain a more accurate calibration. Finally, the ratio of the 3D points correctly projected onto proper image segments is used to evaluate the accuracy of calibration.", "title": "" }, { "docid": "759207b77a14edb08b81cbd53def9960", "text": "Computer Aided Design (CAD) typically involves tasks such as adjusting the camera perspective and assembling pieces in free space that require specifying 6 degrees of freedom (DOF). The standard approach is to factor these DOFs into 2D subspaces that are mapped to the x and y axes of a mouse. This metaphor is inherently modal because one needs to switch between subspaces, and disconnects the input space from the modeling space. In this paper, we propose a bimanual hand tracking system that provides physically-motivated 6-DOF control for 3D assembly. First, we discuss a set of principles that guide the design of our precise, easy-to-use, and comfortable-to-use system. Based on these guidelines, we describe a 3D input metaphor that supports constraint specification classically used in CAD software, is based on only a few simple gestures, lets users rest their elbows on their desk, and works alongside the keyboard and mouse. Our approach uses two consumer-grade webcams to observe the user's hands. We solve the pose estimation problem with efficient queries of a precomputed database that relates hand silhouettes to their 3D configuration. We demonstrate efficient 3D mechanical assembly of several CAD models using our hand-tracking system.", "title": "" }, { "docid": "d8839a4ee6afb89a49d807861f8d3a08", "text": "Single-phase photovoltaic (PV) energy conversion systems are the main solution for small-scale rooftop PV applications. Some multilevel topologies have been commercialized for PV systems and an they are attractive alternative to implement small-scale rooftop PV applications. Efficiency, reliability, power quality and power losses are important concepts to consider in PV converters. For this reason this paper presents a comparison of four multilevel converter based in the T-type topology proposed by Conergy. The presented control scheme is based in single-phase voltage oriented control and simulation results are presented to provide a preliminary validation of each topology. Finally a summary table with the different features of the converters is provided.", "title": "" }, { "docid": "d411b5b732f9d7eec4fc065bc410ae1b", "text": "What do you do to start reading robot hands and the mechanics of manipulation? Searching the book that you love to read first or find an interesting book that will make you want to read? Everybody has difference with their reason of reading a book. Actuary, reading habit must be from earlier. Many people may be love to read, but not a book. It's not fault. Someone will be bored to open the thick book with small words to read. In more, this is the real condition. So do happen probably with this robot hands and the mechanics of manipulation.", "title": "" }, { "docid": "ca655b741316e8c65b6b7590833396e1", "text": "• A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. 
People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.", "title": "" }, { "docid": "c6478751e51c295811c9994f734f1336", "text": "Optical character recognition (OCR) has made great progress in recent years due to the introduction of recognition engines based on recurrent neural networks, in particular the LSTM architecture. This paper describes a new, open-source line recognizer combining deep convolutional networks and LSTMs, implemented in PyTorch and using CUDA kernels for speed. Experimental results are given comparing the performance of different combinations of geometric normalization, 1D LSTM, deep convolutional networks, and 2D LSTM networks. An important result is that while deep hybrid networks without geometric text line normalization outperform 1D LSTM networks with geometric normalization, deep hybrid networks with geometric text line normalization still outperform all other networks. The best networks achieve a throughput of more than 100 lines per second and test set error rates on UW3 of 0.25%.", "title": "" }, { "docid": "2491e79213708cb5fd8e17b3565a7e90", "text": "xi CHAPTER 1. LEARNING CLASSIFIERS FROM LINKED DATA: AN OVERVIEW 1 1.", "title": "" } ]
scidocsrr
def74128c98d3aa37421b11269383ce6
A Bootstrap Method for Automatic Rule Acquisition on Emotion Cause Extraction
[ { "docid": "ea7121fa37b2e41f202a042073c72c54", "text": "Sentiment analysis from text consists of extracting information about opinions, sentiments, and even emotions conveyed by writers towards topics of interest. It is often equated to opinion mining, but it should also encompass emotion mining. Opinion mining involves the use of natural language processing and machine learning to determine the attitude of a writer towards a subject. Emotion mining is also using similar technologies but is concerned with detecting and classifying writers emotions toward events or topics. Textual emotion-mining methods have various applications, including gaining information about customer satisfaction, helping in selecting teaching materials in e-learning, recommending products based on users emotions, and even predicting mental-health disorders. In surveys on sentiment analysis, which are often old or incomplete, the strong link between opinion mining and emotion mining is understated. This motivates the need for a different and new perspective on the literature on sentiment analysis, with a focus on emotion mining. We present the state-of-the-art methods and propose the following contributions: (1) a taxonomy of sentiment analysis; (2) a survey on polarity classification methods and resources, especially those related to emotion mining; (3) a complete survey on emotion theories and emotion-mining research; and (4) some useful resources, including lexicons and datasets.", "title": "" }, { "docid": "3bd77be05377f7c5bede15c276e3f856", "text": "In this paper, we propose a data-oriented method for inferring the emotion of a speaker conversing with a dialog system from the semantic content of an utterance. We first fully automatically obtain a huge collection of emotion-provoking event instances from the Web. With Japanese chosen as a target language, about 1.3 million emotion provoking event instances are extracted using an emotion lexicon and lexical patterns. We then decompose the emotion classification task into two sub-steps: sentiment polarity classification (coarsegrained emotion classification), and emotion classification (fine-grained emotion classification). For each subtask, the collection of emotion-proviking event instances is used as labelled examples to train a classifier. The results of our experiments indicate that our method significantly outperforms the baseline method. We also find that compared with the singlestep model, which applies the emotion classifier directly to inputs, our two-step model significantly reduces sentiment polarity errors, which are considered fatal errors in real dialog applications.", "title": "" }, { "docid": "54c6e02234ce1c0f188dcd0d5ee4f04c", "text": "The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many di erent formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. 
To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.", "title": "" }, { "docid": "51256458513e99bf3750049d542692b8", "text": "Text-level discourse parsing remains a challenge: most approaches employ features that fail to capture the intentional, semantic, and syntactic aspects that govern discourse coherence. In this paper, we propose a recursive model for discourse parsing that jointly models distributed representations for clauses, sentences, and entire discourses. The learned representations can to some extent learn the semantic and intentional import of words and larger discourse units automatically. The proposed framework obtains comparable performance regarding standard discourse parsing evaluations when compared against current state-of-the-art systems.", "title": "" } ]
[ { "docid": "b9538c45fc55caff8b423f6ecc1fe416", "text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.", "title": "" }, { "docid": "60c06e137f13c3fd1673feeb97d9e214", "text": "BACKGROUND\nAnimal-assisted therapy (AAT) is claimed to have a variety of benefits, but almost all published results are anecdotal. We characterized the resident population in long-term care facilities desiring AAT and determined whether AAT can objectively improve loneliness.\n\n\nMETHODS\nOf 62 residents, 45 met inclusion criteria for the study. These 45 residents were administered the Demographic and Pet History Questionnaire (DPHQ) and Version 3 of the UCLA Loneliness Scale (UCLA-LS). They were then randomized into three groups (no AAT; AAT once/week; AAT three times/week; n = 15/group) and retested with the UCLA-LS near the end of the 6-week study.\n\n\nRESULTS\nUse of the DPHQ showed residents volunteering for the study had a strong life-history of emotional intimacy with pets and wished that they currently had a pet. AAT was shown by analysis of covariance followed by pairwise comparison to have significantly reduced loneliness scores in comparison with the no AAT group.\n\n\nCONCLUSIONS\nThe desire for AAT strongly correlates with previous pet ownership. 
AAT reduces loneliness in residents of long-term care facilities.", "title": "" }, { "docid": "f1a7bcd681969d5a5167d1b0397af13a", "text": "The most data-efficient algorithms for reinforcement learning (RL) in robotics are based on uncertain dynamical models: after each episode, they first learn a dynamical model of the robot, then they use an optimization algorithm to find a policy that maximizes the expected return given the model and its uncertainties. It is often believed that this optimization can be tractable only if analytical, gradient-based algorithms are used; however, these algorithms require using specific families of reward functions and policies, which greatly limits the flexibility of the overall approach. In this paper, we introduce a novel model-based RL algorithm, called Black-DROPS (Black-box Data-efficient RObot Policy Search) that: (1) does not impose any constraint on the reward function or the policy (they are treated as black-boxes), (2) is as data-efficient as the state-of-the-art algorithm for data-efficient RL in robotics, and (3) is as fast (or faster) than analytical approaches when several cores are available. The key idea is to replace the gradient-based optimization algorithm with a parallel, black-box algorithm that takes into account the model uncertainties. We demonstrate the performance of our new algorithm on two standard control benchmark problems (in simulation) and a low-cost robotic manipulator (with a real robot).", "title": "" }, { "docid": "e457ab9e14f6fa104a15421d9263815a", "text": "Many aquaculture systems generate high amounts of wastewater containing compounds such as suspended solids, total nitrogen and total phosphorus. Today, aquaculture is imperative because fish demand is increasing. However, the load of waste is directly proportional to the fish production. Therefore, it is necessary to develop more intensive fish culture with efficient systems for wastewater treatment. A number of physical, chemical and biological methods used in conventional wastewater treatment have been applied in aquaculture systems. Constructed wetlands technology is becoming more and more important in recirculating aquaculture systems (RAS) because wetlands have proven to be well-established and a cost-effective method for treating wastewater. This review gives an overview about possibilities to avoid the pollution of water resources; it focuses initially on the use of systems combining aquaculture and plants with a historical review of aquaculture and the treatment of its effluents. It discusses the present state, taking into account the load of pollutants in wastewater such as nitrates and phosphates, and finishes with recommendations to prevent or at least reduce the pollution of water resources in the future.", "title": "" }, { "docid": "848b2645e26206cb77348a2a8389e8d0", "text": "Dimensionality reduction (DR) methods have been commonly used as a principled way to understand the high-dimensional data such as facial images. In this paper, we propose a new supervised DR method called Optimized Projection for Sparse Representation based Classification (OP-SRC), which is based on the recent face recognition method, Sparse Representation based Classification (SRC). SRC seeks a sparse linear combination on all the training data for a given query image, and makes the decision by the minimal reconstruction residual. 
OP-SRC is designed on the decision rule of SRC; it aims to reduce the within-class reconstruction residual and simultaneously increase the between-class reconstruction residual on the training data. The projections are optimized and match well with the mechanism of SRC. Therefore, SRC performs well in the OP-SRC transformed space. The feasibility and effectiveness of the proposed method is verified on the Yale, ORL and UMIST databases with promising results.", "title": "" }, { "docid": "2445b8d7618c051acd743f65ef6f588a", "text": "Recent developments in analysis methods on the non-linear and non-stationary data have received large attention by the image analysts. In 1998, Huang introduced the empirical mode decomposition (EMD) in signal processing. The EMD approach, fully unsupervised, proved reliable for monodimensional (seismic and biomedical) signals. The main contribution of our approach is to apply the EMD to texture extraction and image filtering, which are widely recognized as a difficult and challenging computer vision problem. We developed an algorithm based on bidimensional empirical mode decomposition (BEMD) to extract features at multiple scales or spatial frequencies. These features, called intrinsic mode functions, are extracted by a sifting process. The bidimensional sifting process is realized using morphological operators to detect regional maxima and thanks to radial basis function for surface interpolation. The performance of the texture extraction algorithms, using BEMD method, is demonstrated in the experiment with both synthetic and natural images. © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "22ab8eb2b8eaafb2ee72ea0ed7148ca4", "text": "As travel is taking a more significant part in our life, route recommendation service becomes a big business and attracts many major players in the IT industry. Given a pair of user-specified origin and destination, a route recommendation service aims to provide users with the routes of best travelling experience according to criteria, such as travelling distance, travelling time, traffic condition, etc. However, previous research shows that even the routes recommended by the big-thumb service providers can deviate significantly from the routes travelled by experienced drivers. It means travellers' preferences on route selection are influenced by many latent and dynamic factors that are hard to model exactly with pre-defined formulas. In this work we approach this challenging problem with a very different perspective- leveraging crowds' knowledge to improve the recommendation quality. In this light, CrowdPlanner - a novel crowd-based route recommendation system has been developed, which requests human workers to evaluate candidate routes recommended by different sources and methods, and determine the best route based on their feedback. In this paper, we particularly focus on two important issues that affect system performance significantly: (1) how to efficiently generate tasks which are simple to answer but possess sufficient information to derive user-preferred routes; and (2) how to quickly identify a set of appropriate domain experts to answer the questions timely and accurately. Specifically, the task generation component in our system generates a series of informative and concise questions with optimized ordering for a given candidate route set so that workers feel comfortable and easy to answer. 
In addition, the worker selection component utilizes a set of selection criteria and an efficient algorithm to find the most eligible workers to answer the questions with high accuracy. A prototype system has been deployed to many voluntary mobile clients and extensive tests on real-scenario queries have shown the superiority of CrowdPlanner in comparison with the results given by map services and popular route mining algorithms.", "title": "" }, { "docid": "a54f912c14b44fc458ed8de9e19a5e82", "text": "Musical training has recently gained additional interest in education as increasing neuroscientific research demonstrates its positive effects on brain development. Neuroimaging revealed plastic changes in the brains of adult musicians but it is still unclear to what extent they are the product of intensive music training rather than of other factors, such as preexisting biological markers of musicality. In this review, we synthesize a large body of studies demonstrating that benefits of musical training extend beyond the skills it directly aims to train and last well into adulthood. For example, children who undergo musical training have better verbal memory, second language pronunciation accuracy, reading ability and executive functions. Learning to play an instrument as a child may even predict academic performance and IQ in young adulthood. The degree of observed structural and functional adaptation in the brain correlates with intensity and duration of practice. Importantly, the effects on cognitive development depend on the timing of musical initiation due to sensitive periods during development, as well as on several other modulating variables. Notably, we point to motivation, reward and social context of musical education, which are important yet neglected factors affecting the long-term benefits of musical training. Further, we introduce the notion of rhythmic entrainment and suggest that it may represent a mechanism supporting learning and development of executive functions. It also hones temporal processing and orienting of attention in time that may underlie enhancements observed in reading and verbal memory. We conclude that musical training uniquely engenders near and far transfer effects, preparing a foundation for a range of skills, and thus fostering cognitive development.", "title": "" }, { "docid": "5ebddc1a0ce88499702deab9d57ccb62", "text": "Research into statistical parsing for English has enjoyed over a decade of successful results. However, adapting these models to other languages has met with difficulties. Previous comparative work has shown that Modern Arabic is one of the most difficult languages to parse due to rich morphology and free word order. Classical Arabic is the ancient form of Arabic, and is understudied in computational linguistics, relative to its worldwide reach as the language of the Quran. The thesis is based on seven publications that make significant contributions to knowledge relating to annotating and parsing Classical Arabic. Classical Arabic has been studied in depth by grammarians for over a thousand years using a traditional grammar known as i’rāb (ةاغعإ). Using this grammar to develop a representation for parsing is challenging, as it describes syntax using a hybrid of phrase-structure and dependency relations. This work aims to advance the state-of-the-art for hybrid parsing by introducing a formal representation for annotation and a resource for machine learning. 
The main contributions are the first treebank for Classical Arabic and the first statistical dependency-based parser in any language for ellipsis, dropped pronouns and hybrid representations. A central argument of this thesis is that using a hybrid representation closely aligned to traditional grammar leads to improved parsing for Arabic. To test this hypothesis, two approaches are compared. As a reference, a pure dependency parser is adapted using graph transformations, resulting in an 87.47% F1-score. This is compared to an integrated parsing model with an F1-score of 89.03%, demonstrating that joint dependency-constituency parsing is better suited to Classical Arabic. The Quran was chosen for annotation as a large body of work exists providing detailed syntactic analysis. Volunteer crowdsourcing is used for annotation in combination with expert supervision. A practical result of the annotation effort is the corpus website: http://corpus.quran.com, an educational resource with over two million users per year. „Glory be to thee! We have no knowledge except what you have taught us. Indeed it is you who is the all-knowing, the all-wise.‟ A prayer of the angels – The Quran, verse (2:32)", "title": "" }, { "docid": "9e439c83f4c29b870b1716ceae5aa1f3", "text": "Suspension system plays an imperative role in retaining the continuous road wheel contact for better road holding. In this paper, a fuzzy self-tuning PID controller is designed to control an active suspension system for a quarter car model. Fuzzy self-tuning is used to develop the optimal control gains for the PID controller (proportional, integral, and derivative gains) to minimize the suspension working space of the sprung mass and its change rate, to achieve the best comfort of the driver. The results of the active suspension system with the fuzzy self-tuning PID controller are presented graphically and compared with the PID and passive systems. It is found that the effectiveness of fuzzy self-tuning appears in its ability to tune the gain parameters of the PID controller.", "title": "" }, { "docid": "8b66fa77a75980128ac290036ea7c3c5", "text": "We mutated, by gene targeting, the endogenous hypoxanthine phosphoribosyl transferase (HPRT) gene in mouse embryo-derived stem (ES) cells. A specialized construct of the neomycin resistance (neo^r) gene was introduced into an exon of a cloned fragment of the Hprt gene and used to transfect ES cells. Among the G418^r colonies, 1/1000 were also resistant to the base analog 6-thioguanine (6-TG). The G418^r, 6-TG^r cells were all shown to be Hprt^- as the result of homologous recombination with the exogenous, neo^r-containing, Hprt sequences. We have compared the gene-targeting efficiencies of two classes of neo^r-Hprt recombinant vectors: those that replace the endogenous sequence with the exogenous sequence and those that insert the exogenous sequence into the endogenous sequence. The targeting efficiencies of both classes of vectors are strongly dependent upon the extent of homology between exogenous and endogenous sequences.
The protocol described herein should be useful for targeting mutations into any gene.", "title": "" }, { "docid": "f03749ebd15b51b95e8ece5d6d58108c", "text": "The holding of an infant with ventral skin-to-skin contact typically in an upright position with the swaddled infant on the chest of the parent, is commonly referred to as kangaroo care (KC), due to its simulation of marsupial care. It is recommended that KC, as a feasible, natural, and cost-effective intervention, should be standard of care in the delivery of quality health care for all infants, regardless of geographic location or economic status. Numerous benefits of its use have been reported related to mortality, physiological (thermoregulation, cardiorespiratory stability), behavioral (sleep, breastfeeding duration, and degree of exclusivity) domains, as an effective therapy to relieve procedural pain, and improved neurodevelopment. Yet despite these recommendations and a lack of negative research findings, adoption of KC as a routine clinical practice remains variable and underutilized. Furthermore, uncertainty remains as to whether continuous KC should be recommended in all settings or if there is a critical period of initiation, dose, or duration that is optimal. This review synthesizes current knowledge about the benefits of KC for infants born preterm, highlighting differences and similarities across low and higher resource countries and in a non-pain and pain context. Additionally, implementation considerations and unanswered questions for future research are addressed.", "title": "" }, { "docid": "eb1922f447ba14dede483e3ad6f17fd1", "text": "Gesture recognition is needed in many applications such as human-computer interaction and sign language recognition. The challenges of building an actual recognition system do not lie only in reaching an acceptable recognition accuracy but also with requirements for fast online processing. In this paper, we propose a method for online gesture recognition using RGB-D data from a Kinect sensor. Frame-level features are extracted from RGB frames and the skeletal model obtained from the depth data, and then classified by multiple extreme learning machines. The outputs from the classifiers are aggregated to provide the final classification results for the gestures. We test our method on the ChaLearn multi-modal gesture challenge data. The results of the experiments demonstrate that the method can perform effective multi-class gesture recognition in real-time.", "title": "" }, { "docid": "542117c3e27d15163b809a528952fb79", "text": "Predicting the gap between taxi demand and supply in taxi booking apps is completely new and important but challenging. However, manually mining gap rule for different conditions may become impractical because of massive and sparse taxi data. Existing works unilaterally consider demand or supply, used only few simple features and verified by little data, but not predict the gap value. Meanwhile, none of them dealing with missing values. In this paper, we introduce a Double Ensemble Gradient Boosting Decision Tree Model(DEGBDT) to predict taxi gap. (1) Our approach specifically considers demand and supply to predict the gap between them. (2) Also, our method provides a greedy feature ranking and selecting method to exploit most reliable feature. (3) To deal with missing value, our model takes the lead in proposing a double ensemble method, which secondarily integrates different Gradient Boosting Decision Tree(GBDT) model at the different data sparse situation. 
Experiments on real large-scale dataset demonstrate that our approach can effectively predict the taxi gap than state-of-the-art methods, and shows that double ensemble method is efficacious for sparse data.", "title": "" }, { "docid": "0d343c568f5f6ad396d5f1e5414f406a", "text": "Modern games often feature highly detailed urban environments. However, buildings are typically represented only by their façade, because of the excessive costs it would entail to manually model all interiors. Although automated procedural techniques for building interiors exist, their application is to date limited. One of the reasons for this is because designers have too little control over the resulting room topology. Also, generated floor plans do not always adhere to the necessary consistency constraints, such as reachability and connectivity. In this paper, we propose a novel and flexible technique for generating procedural floor plans subject to user-defined constraints. We demonstrate its versatility, by showing generated floor plans for different classes of buildings, some of which consist of several connected floors. It is concluded that this method results in plausible floor plans, over which a designer has control by defining functional constraints. Furthermore, the method is efficient and easy to implement and integrate into the larger context of procedural modeling of urban environments.", "title": "" }, { "docid": "127405febe57f4df6f8f16d42e0ac762", "text": "In the recent years there has been an increase in scientific papers publications in Albania and its neighboring countries that have large communities of Albanian speaking researchers. Many of these papers are written in Albanian. It is a very time consuming task to find papers related to the researchers’ work, because there is no concrete system that facilitates this process. In this paper we present the design of a modular intelligent search system for articles written in Albanian. The main part of it is the recommender module that facilitates searching by providing relevant articles to the users (in comparison with a given one). We used a cosine similarity based heuristics that differentiates the importance of term frequencies based on their location in the article. We did not notice big differences on the recommendation results when using different combinations of the importance factors of the keywords, title, abstract and body. We got similar results when using only theand body. We got similar results when using only the title and abstract in comparison with the other combinations. Because we got fairly good results in this initial approach, we believe that similar recommender systems for documents written in Albanian can be built also in contexts not related to scientific publishing. Keywords—recommender system; Albanian; information retrieval; intelligent search; digital library", "title": "" }, { "docid": "361dc8037ebc30cd2f37f4460cf43569", "text": "OVERVIEW: Next-generation semiconductor factories need to support miniaturization below 100 nm and have higher production efficiency, mainly of 300-mm-diameter wafers. Particularly to reduce the price of semiconductor devices, shorten development time [thereby reducing the TAT (turn-around time)], and support frequent product changeovers, semiconductor manufacturers must enhance the productivity of their systems. To meet these requirements, Hitachi proposes solutions that will support e-manufacturing on the next-generation semiconductor production line (see Fig. 1). 
Yasutsugu Usami Isao Kawata Hideyuki Yamamoto Hiroyoshi Mori Motoya Taniguchi, Dr. Eng.", "title": "" }, { "docid": "37e31e8e6173b77624494a235744c4a5", "text": "The extraction of multi-attribute objects from the deep web is the bridge between the unstructured web and structured data. Existing approaches either induce wrappers from a set of human-annotated pages or leverage repeated structures on the page without supervision. What the former lack in automation, the latter lack in accuracy. Thus accurate, automatic multi-attribute object extraction has remained an open challenge. AMBER overcomes both limitations through mutual supervision between the repeated structure and automatically produced annotations. Previous approaches based on automatic annotations have suffered from low quality due to the inherent noise in the annotations and have attempted to compensate by exploring multiple candidate wrappers. In contrast, AMBER compensates for this noise by integrating repeated structure analysis with annotation-based induction: The repeated structure limits the search space for wrapper induction, and conversely, annotations allow the repeated structure analysis to distinguish noise from relevant data. Both, low recall and low precision in the annotations are mitigated to achieve almost human quality (> 98%) multiattribute object extraction. To achieve this accuracy, AMBER needs to be trained once for an entire domain. AMBER bootstraps its training from a small, possibly noisy set of attribute instances and a few unannotated sites of the domain.", "title": "" }, { "docid": "d4eb3631b1cc8edd2f1eafe678d04a31", "text": "Social media being a prolific source of rumours, stance classification of individual posts towards rumours has gained attention in the past few years. Classification of stance in individual posts can then be useful to determine the veracity of a rumour. Research in this direction has looked at rumours in different domains, such as politics, natural disasters or terrorist attacks. However, work has been limited to in-domain experiments, i.e. training and testing data belong to the same domain. This presents the caveat that when one wants to deal with rumours in domains that are more obscure, training data tends to be scarce. This is the case of mental health disorders, which we explore here. Having annotated collections of tweets around rumours emerged in the context of breaking news, we study the performance stability when switching to the new domain of mental health disorders. Our study confirms that performance drops when we apply our trained model on a new domain, emphasising the differences in rumours across domains. We overcome this issue by using a little portion of the target domain data for training, which leads to a substantial boost in performance. We also release the new dataset with mental health rumours annotated for stance.", "title": "" }, { "docid": "c2a955e02d73537a9439e03c1d4d1788", "text": "BACKGROUND\nLyme neuroborreliosis (LNB) is a nervous system infection caused by Borrelia burgdorferi sensu lato (Bb).\n\n\nOBJECTIVES\nTo present evidence-based recommendations for diagnosis and treatment.\n\n\nMETHODS\nData were analysed according to levels of evidence as suggested by EFNS.\n\n\nRECOMMENDATIONS\nThe following three criteria should be fulfilled for definite LNB, and two of them for possible LNB: (i) neurological symptoms; (ii) cerebrospinal fluid (CSF) pleocytosis; (iii) Bb-specific antibodies produced intrathecally. 
PCR and CSF culture may be corroborative if symptom duration is <6 weeks, when Bb antibodies may be absent. PCR is otherwise not recommended. There is also not enough evidence to recommend the following tests for diagnostic purposes: microscope-based assays, chemokine CXCL13, antigen detection, immune complexes, lymphocyte transformation test, cyst formation, lymphocyte markers. Adult patients with definite or possible acute LNB (symptom duration <6 months) should be offered a single 14-day course of antibiotic treatment. Oral doxycycline (200 mg daily) and intravenous (IV) ceftriaxone (2 g daily) are equally effective in patients with symptoms confined to the peripheral nervous system, including meningitis (level A). Patients with CNS manifestations should be treated with IV ceftriaxone (2 g daily) for 14 days and late LNB (symptom duration >6 months) for 3 weeks (good practice points). Children should be treated as adults, except that doxycycline is contraindicated under 8 years of age (nine in some countries). If symptoms persist for more than 6 months after standard treatment, the condition is often termed post-Lyme disease syndrome (PLDS). Antibiotic therapy has no impact on PLDS (level A).", "title": "" } ]
scidocsrr
ca935053686984e47a488a8c1df7626f
Evaluating and Improving Penetration Testing in Web Services
[ { "docid": "67e331931037c45dd008871ca3aa6db7", "text": "Web services are often deployed with critical software bugs that may be maliciously exploited. Developers often trust on penetration testing tools to detect those vulnerabilities but the effectiveness of such technique is limited by the lack of information on the internal state of the tested services. This paper proposes a new approach for the detection of injection vulnerabilities in web services. The approach uses attack signatures and interface monitoring to increase the visibility of the penetration testing process, yet without needing to access web service's internals (as these are frequently not available). To demonstrate the feasibility of the approach we implemented a prototype tool to detect SQL Injection vulnerabilities in SOAP. An experimental evaluation comparing this prototype with three commercial penetration testers was conducted. Results show that our prototype is able to achieve much higher detection coverage than those testers while avoiding false positives, indicating that the proposed approach can be used in real development scenarios.", "title": "" }, { "docid": "d1f771fd1b0f8e5d91bbf65bc19aeb54", "text": "Web-based systems are often a composition of infrastructure components, such as web servers and databases, and of applicationspecific code, such as HTML-embedded scripts and server-side applications. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web-applications are deployed and made available to the Internet at large, creating easilyexploitable entry points for the compromise of entire networks. Web-based applications often rely on back-end database servers to manage application-specific persistent state. The data is usually extracted by performing queries that are assembled using input provided by the users of the applications. If user input is not sanitized correctly, it is possible to mount a variety of attacks that leverage web-based applications to compromise the security of back-end databases. Unfortunately, it is not always possible to identify these attacks using signature-based intrusion detection systems, because of the ad hoc nature of many web-based applications. Signatures are rarely written for this class of applications due to the substantial investment of time and expertise this would require. We have developed an anomaly-based system that learns the profiles of the normal database access performed by web-based applications using a number of different models. These models allow for the detection of unknown attacks with reduced false positives and limited overhead. In addition, our solution represents an improvement with respect to previous approaches because it reduces the possibility of executing SQL-based mimicry attacks.", "title": "" }, { "docid": "5025766e66589289ccc31e60ca363842", "text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. 
In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.", "title": "" } ]
[ { "docid": "258601c560572a9c43823fe65481a3bf", "text": "Dewarping of documents captured with hand-held cameras in an uncontrolled environment has triggered a lot of interest in the scientific community over the last few years and many approaches have been proposed. However, there has been no comparative evaluation of different dewarping techniques so far. In an attempt to fill this gap, we have organized a page dewarping contest along with CBDAR 2007. We have created a dataset of 102 documents captured with a hand-held camera and have made it freely available online. We have prepared text-line, text-zone, and ASCII text ground-truth for the documents in this dataset. Three groups participated in the contest with their methods. In this paper we present an overview of the approaches that the participants used, the evaluation measure, and the dataset used in the contest. We report the performance of all participating methods. The evaluation shows that none of the participating methods was statistically significantly better than any other participating method.", "title": "" }, { "docid": "c83b5485eb812c27b86fcfa121dd41a1", "text": "One of the critical issues in control of grid connected power converters is synchronization with the utility voltage. Decoupled Double Synchronous Reference Frame Phase-Locked Loop (DDSRF-PLL) can detect positive and negative sequences of voltage under unbalanced conditions. However, its dynamic response is very sensitive to phase angle jumps in the voltage which results in large deviation in the estimated frequency. This paper investigates conventional DDSRF PLL and hybrid Decoupled Alfa Beta (DAB) PLL, and then proposes a new DDSRF PLL that outperforms two mentioned PLLs. The proposed PLL is a combination of DDSRF PLL and dual second-order generalized integrator (DSOGI) that estimates grid frequency with lesser overshoot than those of two other PLLs. The performance of the proposed PLL is verified by using time domain simulation studies in the DIgSILENT PowerFactory software environment.", "title": "" }, { "docid": "0e83d6d4ba37a6262c464ade8b29f157", "text": "We propose a novel approach for instance segmentation given an image of homogeneous object cluster (HOC). Our learning approach is one-shot because a single video of an object instance is captured and it requires no human annotation. Our intuition is that images of homogeneous objects can be effectively synthesized based on structure and illumination priors derived from real images. A novel solver is proposed that iteratively maximizes our structured likelihood to generate realistic images of HOC. Illumination transformation scheme is applied to make the real and synthetic images share the same illumination condition. Extensive experiments and comparisons are performed to verify our method. We build a dataset consisting of pixel-level annotated images of HOC. The dataset and code will be published with the paper.", "title": "" }, { "docid": "6243620ecc902b74a5a1e67a92f2082b", "text": "Wireless communication with unmanned aerial vehicles (UAVs) is a promising technology for future communication systems. In this paper, assuming that the UAV flies horizontally with a fixed altitude, we study energy-efficient UAV communication with a ground terminal via optimizing the UAV’s trajectory, a new design paradigm that jointly considers both the communication throughput and the UAV’s energy consumption. 
To this end, we first derive a theoretical model on the propulsion energy consumption of fixed-wing UAVs as a function of the UAV’s flying speed, direction, and acceleration. Based on the derived model and by ignoring the radiation and signal processing energy consumption, the energy efficiency of UAV communication is defined as the total information bits communicated normalized by the UAV propulsion energy consumed for a finite time horizon. For the case of unconstrained trajectory optimization, we show that both the rate-maximization and energy-minimization designs lead to vanishing energy efficiency and thus are energy-inefficient in general. Next, we introduce a simple circular UAV trajectory, under which the UAV’s flight radius and speed are jointly optimized to maximize the energy efficiency. Furthermore, an efficient design is proposed for maximizing the UAV’s energy efficiency with general constraints on the trajectory, including its initial/final locations and velocities, as well as minimum/maximum speed and acceleration. Numerical results show that the proposed designs achieve significantly higher energy efficiency for UAV communication as compared with other benchmark schemes.", "title": "" }, { "docid": "c9a4aff9871fa2f10c61bfb05b820141", "text": "With single computer's computation power not sufficing, need for sharing resources to manipulate and manage data through clouds is increasing rapidly. Hence, it is favorable to delegate computations or store data with a third party, the cloud provider. However, delegating data to third party poses the risks of data disclosure during computation. The problem can be addressed by carrying out computation without decrypting the encrypted data. The results are also obtained encrypted and can be decrypted at the user side. This requires modifying functions in such a way that they are still executable while privacy is ensured or to search an encrypted database. Homomorphic encryption provides security to cloud consumer data while preserving system usability. We propose a symmetric key homomorphic encryption scheme based on matrix operations with primitives that make it easily adaptable for different needs in various cloud computing scenarios.", "title": "" }, { "docid": "1e0ddc413489d21c8580ec2ecc6ac69e", "text": "We present several interrelated technical and empirical contributions to the problem of emotion-based music recommendation and show how they can be applied in a possible usage scenario. The contributions are (1) a new three-dimensional resonance-arousal-valence model for the representation of emotion expressed in music, together with methods for automatically classifying a piece of music in terms of this model, using robust regression methods applied to musical/acoustic features; (2) methods for predicting a listener’s emotional state on the assumption that the emotional state has been determined entirely by a sequence of pieces of music recently listened to, using conditional random fields and taking into account the decay of emotion intensity over time; and (3) a method for selecting a ranked list of pieces of music that match a particular emotional state, using a minimization iteration method. A series of experiments yield information about the validity of our operationalizations of these contributions. 
Throughout the article, we refer to an illustrative usage scenario in which all of these contributions can be exploited, where it is assumed that (1) a listener’s emotional state is being determined entirely by the music that he or she has been listening to and (2) the listener wants to hear additional music that matches his or her current emotional state. The contributions are intended to be useful in a variety of other scenarios as well.", "title": "" }, { "docid": "346d2ead797b07d9df0bccfb5bb07c9e", "text": "There is no doubt that, chitosan is considered as one of the most important biopolymers that can easily extracted from nature resources or synthesized in the chemical laboratories. Chitosan also display a suitable number of important properties in different fields of applications. Recently, chitosan has been reported as a perfect candidate as a trestle macromolecule for variable biological fields of study. This include, tissue engineering. cell culture and gene delivery, etc. Furthermore, chitosan has widely used in different types of industries which include: food, agriculture, fragrance, and even cosmetic industries. Besides that, chitosan derivatives is treated as excellent tool in waste water treatment. Therefore, the present work gives a simple selective overview for different modifications of Chitosan macromolecule with a special attention to its biological interest. Prior that, a closer look to its resources, chemical structure as well as general properties has been also determined which include its solubility character and its molecular weight. Furthermore, the chemistry of chitosan has been also mentioned with selected examples of each type of interaction. Finally a brief for sulfone based modified chitosan has been reported including classical methods of synthesis and its experimental variants.", "title": "" }, { "docid": "d9123053892ce671665a3a4a1694a57c", "text": "Visual perceptual learning (VPL) is defined as a long-term improvement in performance on a visual task. In recent years, the idea that conscious effort is necessary for VPL to occur has been challenged by research suggesting the involvement of more implicit processing mechanisms, such as reinforcement-driven processing and consolidation. In addition, we have learnt much about the neural substrates of VPL and it has become evident that changes in visual areas and regions beyond the visual cortex can take place during VPL.", "title": "" }, { "docid": "ff6420335374291508063663acb9dbe6", "text": "Many people are exposed to loss or potentially traumatic events at some point in their lives, and yet they continue to have positive emotional experiences and show only minor and transient disruptions in their ability to function. Unfortunately, because much of psychology's knowledge about how adults cope with loss or trauma has come from individuals who sought treatment or exhibited great distress, loss and trauma theorists have often viewed this type of resilience as either rare or pathological. The author challenges these assumptions by reviewing evidence that resilience represents a distinct trajectory from the process of recovery, that resilience in the face of loss or potential trauma is more common than is often believed, and that there are multiple and sometimes unexpected pathways to resilience.", "title": "" }, { "docid": "ec0ef1585583c2729e256149898be906", "text": "Over the last two decades, the organizational environment of Higher Education Institutions (HEI) in many countries, has fundamentally changed. 
Student numbers have continuously increased since the 1980s and transformed Higher Education (HE) from an exclusive offering for a small elite to a mass product. Consequently, universities had to increasingly deal with operations management issues such as capacity planning and efficiency. In order to enable this expansion and as means to facilitate competition, the funding structure of HEI's has changed. Greater reliance on tuition fees and industryfunded research exposed universities to the forces of the market. All in all, growth, commercialization and competition have transformed HEI's from publicly funded cosy elite institutions to large professional service operations with more demanding customers. Consequently, they increasingly look at private sector management practices to deal with the rising performance pressure. During the last two decades, Lean Management has received the reputation as a reliable method for achieving performance improvements by delivering higher quality at lower costs. From its origins in manufacturing, Lean has spread first to the service sector and is now successfully adopted by an increasing number of public sector organizations. Paradoxically, the enthusiasm for Lean in HE has so far been limited. A conceptual framework for applying Lean Management methodology in HEI's is presented in this paper.", "title": "" }, { "docid": "b47127a755d7bef1c5baf89253af46e7", "text": "In an effort to explain pro-environmental behavior, environmental sociologists often study environmental attitudes. While much of this work is atheoretical, the focus on attitudes suggests that researchers are implicitly drawing upon attitude theory in psychology. The present research brings sociological theory to environmental sociology by drawing on identity theory to understand environmentally responsive behavior. We develop an environment identity model of environmental behavior that includes not only the meanings of the environment identity, but also the prominence and salience of the environment identity and commitment to the environment identity. We examine the identity process as it relates to behavior, though not to the exclusion of examining the effects of environmental attitudes. The findings reveal that individual agency is important in influencing environmentally responsive behavior, but this agency is largely through identity processes, rather than attitude processes. This provides an important theoretical and empirical advance over earlier work in environmental sociology.", "title": "" }, { "docid": "f78acf94cabc87d4b53462987f5d77c7", "text": "Activity recognition has become an important function in many emerging computer vision applications e.g. automatic video surveillance system, human-computer interaction application, and video recommendation system, etc. In this paper, we propose a novel semantics based group activity recognition scheme, namely SBGAR, which achieves higher accuracy and efficiency than existing group activity recognition methods. SBGAR consists of two stages: in stage I, we use a LSTM model to generate a caption for each video frame; in stage II, another LSTM model is trained to predict the final activity categories based on these generated captions. We evaluate SBGAR using two well-known datasets: the Collective Activity Dataset and the Volleyball Dataset. 
Our experimental results show that SBGAR improves the group activity recognition accuracy with shorter computation time compared to the state-of-the-art methods.", "title": "" }, { "docid": "a7cde63f274baf7310a91aca6f77d920", "text": "Over the past decade, growing evidence indicates that the tumor microenvironment (TME) contributes with genomic/epigenomic aberrations of malignant cells to enhance cancer cells survival, invasion, and dissemination. Many factors, produced or de novo synthesized by immune, stromal, or malignant cells, acting in a paracrine and autocrine fashion, remodel TME and the adaptive immune response culminating in metastasis. Taking into account the recent accomplishments in the field of immune oncology and using metastatic colorectal cancer (mCRC) as a model, we propose that the evasion of the immune surveillance and metastatic spread can be achieved through a number of mechanisms that include (a) intrinsic plasticity and adaptability of immune and malignant cells to paracrine and autocrine stimuli or genotoxic stresses; (b) alteration of positional schemes of myeloid-lineage cells, produced by factors controlling the balance between tumour-suppressing and tumour-promoting activities; (c) acquisition by cancer cells of aberrant immune-phenotypic traits (NT5E/CD73, CD68, and CD163) that enhance the interactions among TME components through the production of immune-suppressive mediators. These properties may represent the driving force of metastatic progression and thus clinically exploitable for cancer prevention and therapy. In this review we summarize results and suggest new hypotheses that favour the growing impact of tumor-infiltrating immune cells on tumour progression, metastasis, and therapy resistance.", "title": "" }, { "docid": "7842812cf4614ed8711b6fa99fc2a1bc", "text": "Grids and peer-to-peer (P2P) networks have emerged as popular platforms for the next generation parallel and distributed computing. In these environments, resources are geographically distributed, managed and owned by various organizations with different policies, and interconnected by wide-area networks or the Internet. This introduces a number of resource management and application scheduling challenges in the domain of security, resource and policy heterogeneity, fault tolerance, and dynamic resource conditions. In these dynamic distributed computing environments, it is hard and challenging to carry out resource management design studies in a repeatable and controlled manner as resources and users are autonomous and distributed across multiple organizations with their own policies. Therefore, simulations have emerged as the most feasible technique for analyzing policies for resource allocation. This paper presents emerging trends in distributed computing and their promises for revolutionizing the computing field, and identifies distinct characteristics and challenges in building them. We motivate opportunities for modeling and simulation communities and present our discrete-event grid simulation toolkit, called GridSim, used by researchers world-wide for investigating the design of utility-oriented computing systems such as Data Centers and Grids. 
We present various case studies on the use of GridSim in modeling and simulation of Business Grids, parallel applications scheduling, workflow scheduling, and service pricing and revenue management.", "title": "" }, { "docid": "587c6f30cda5f45a6b43d55197d2ed40", "text": "We present a mechanism that puts users in the center of control and empowers them to dictate the access to their collections of data. Revisiting the fundamental mechanisms in security for providing protection, our solution uses capabilities, access lists, and access rights following well-understood formal notions for reasoning about access. This contribution presents a practical, correct, auditable, transparent, distributed, and decentralized mechanism that is well-matched to the current emerging environments including Internet of Things, smart city, precision medicine, and autonomous cars. It is based on well-tested principles and practices used in distributed authorization, cryptocurrencies, and scalable computing.", "title": "" }, { "docid": "05edf6dc5d4b9726773f56dafc620619", "text": "Software systems running continuously for a long time tend to show degrading performance and an increasing failure occurrence rate, due to error conditions that accrue over time and eventually lead the system to failure. This phenomenon is usually referred to as \\textit{Software Aging}. Several long-running mission and safety critical applications have been reported to experience catastrophic aging-related failures. Software aging sources (i.e., aging-related bugs) may be hidden in several layers of a complex software system, ranging from the Operating System (OS) to the user application level. This paper presents a software aging analysis at the Operating System level, investigating software aging sources inside the Linux kernel. Linux is increasingly being employed in critical scenarios; this analysis intends to shed light on its behaviour from the aging perspective. The study is based on an experimental campaign designed to investigate the kernel internal behaviour over long running executions. By means of a kernel tracing tool specifically developed for this study, we collected relevant parameters of several kernel subsystems. Statistical analysis of collected data allowed us to confirm the presence of aging sources in Linux and to relate the observed aging dynamics to the monitored subsystems behaviour. The analysis output allowed us to infer potential sources of aging in the kernel subsystems.", "title": "" }, { "docid": "3e13cc47f0209c6a7d213dd170d3bcc1", "text": "The science of opinion analysis based on data from social networks and other forms of mass media has garnered the interest of the scientific community and the business world. Dealing with the increasing amount of information present on the Web is a critical task and requires efficient models developed by the emerging field of sentiment analysis. To this end, current research proposes an efficient approach to support emotion recognition and polarity detection in natural language text. In this paper, we show how to exploit the most recent technological tools and advances in Statistical Learning Theory (SLT) in order to efficiently build an Extreme Learning Machine (ELM) and assess the resultant model's performance when applied to big social data analysis. ELM represents a powerful learning tool, developed to overcome some issues in back-propagation networks. 
The main problem with ELM is in training them to work in the event of a large number of available samples, where the generalization performance has to be carefully assessed. For this reason, we propose an ELM implementation that exploits the Spark distributed in memory technology and show how to take advantage of the most recent advances in SLT in order to address the issue of selecting ELM hyperparameters that give the best generalization performance.", "title": "" }, { "docid": "9dc17e79614f2f4304f7728eadd301e4", "text": "A 9 bit 11 GS/s DAC is presented that achieves an SFDR of more than 50 dB across Nyquist and IM3 below -50 dBc across Nyquist. The DAC uses a two-times interleaved architecture to suppress spurs that typically limit DAC performance. Despite requiring two current-steering DACs for the interleaved architecture, the relative low demands on performance of these sub-DACs imply that they can be implemented in an area and power efficient way. Together with a quad-switching architecture to decrease demands on the power supply and bias generation and employing the multiplexer switches in triode, the total core area is only 0.04 mm 2 while consuming 110 mW from a single 1.0 V supply.", "title": "" }, { "docid": "732edb7fe28fa894fd186c5512e8cb8d", "text": "In knowledge management literature it is often pointed out that it is important to distinguish between data, information and knowledge. The generally accepted view sees data as simple facts that become information as data is combined into meaningful structures, which subsequently become knowledge as meaningful information is put into a context and when it can be used to make predictions. This view sees data as a prerequisite for information, and information as a prerequisite for knowledge. In this paper, I will explore the conceptual hierarchy of data, information and knowledge, showing that data emerges only after we have information, and that information emerges only after we already have knowledge. The reversed hierarchy of knowledge is shown to lead to a different approach in developing information systems that support knowledge management and organizational memory. It is also argued that this difference may have major implications for organizational flexibility and renewal.", "title": "" }, { "docid": "b2db53f203f2b168ec99bd8e544ff533", "text": "BACKGROUND\nThis study aimed to analyze the scientific outputs of esophageal and esophagogastric junction (EGJ) cancer and construct a model to quantitatively and qualitatively evaluate pertinent publications from the past decade.\n\n\nMETHODS\nPublications from 2007 to 2016 were retrieved from the Web of Science Core Collection database. Microsoft Excel 2016 (Redmond, WA) and the CiteSpace (Drexel University, Philadelphia, PA) software were used to analyze publication outcomes, journals, countries, institutions, authors, research areas, and research frontiers.\n\n\nRESULTS\nA total of 12,978 publications on esophageal and EGJ cancer were identified published until March 23, 2017. The Journal of Clinical Oncology had the largest number of publications, the USA was the leading country, and the University of Texas MD Anderson Cancer Center was the leading institution. Ajani JA published the most papers, and Jemal A had the highest co-citation counts. 
Esophageal squamous cell carcinoma ranked the first in research hotspots, and preoperative chemotherapy/chemoradiotherapy ranked the first in research frontiers.\n\n\nCONCLUSION\nThe annual number of publications steadily increased in the past decade. A considerable number of papers were published in journals with high impact factor. Many Chinese institutions engaged in esophageal and EGJ cancer research but significant collaborations among them were not noted. Jemal A, Van Hagen P, Cunningham D, and Enzinger PC were identified as good candidates for research collaboration. Neoadjuvant therapy and genome-wide association study in esophageal and EGJ cancer research should be closely observed.", "title": "" } ]
scidocsrr
4a168d6564caedd5a613b057ba0c946f
Robust classification of different fingerprint copies with deep neural networks for database penetration rate reduction
[ { "docid": "8886644f1200cd9c5bbe7536cb3d16c1", "text": "Fingerprint classification reduces the number of possible matches in automated fingerprint identification systems by categorizing fingerprints into predefined classes. Support vector machines (SVMs) are widely used in pattern classification and have produced high accuracy when performing fingerprint classification. In order to effectively apply SVMs to multi-class fingerprint classification systems, we propose a novel method in which the SVMs are generated with the one-vs-all (OVA) scheme and dynamically ordered with naïve Bayes classifiers. This is necessary to break the ties that frequently occur when working with multi-class classification systems that use OVA SVMs. More specifically, it uses representative fingerprint features as the FingerCode, singularities and pseudo ridges to train the OVA SVMs and naïve Bayes classifiers. The proposed method has been validated on the NIST-4 database and produced a classification accuracy of 90.8% for five-class classification with the statistical significance. The results show the benefits of integrating different fingerprint features as well as the usefulness of the proposed method in multi-class fingerprint classification. 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "63a3126fb97982e6d52265ae3d07c0cc", "text": "This work complements our previous efforts in generating realistic fingerprint images for test purposes. The main variability which characterizes the acquisition of a fingerprint through an on-line sensor is modeled and a sequence of steps is defined to derive a series of impressions from the same master-fingerprint. This allows large fingerprint databases to be randomly generated according to some given parameters. The experimental results validate our technique and prove that it can be very useful for performance evaluation, learning and testing in fingerprint-based systems.", "title": "" } ]
[ { "docid": "1f1dec890f0bcb25d240b7b7f576593c", "text": "Existing keyphrase generation studies suffer from the problems of generating duplicate phrases and deficient evaluation based on a fixed number of predicted phrases. We propose a recurrent generative model that generates multiple keyphrases sequentially from a text, with specific modules that promote generation diversity. We further propose two new metrics that consider a variable number of phrases. With both existing and proposed evaluation setups, our model demonstrates superior performance to baselines on three types of keyphrase generation datasets, including two newly introduced in this work: STACKEXCHANGE and TEXTWORLD ACG. In contrast to previous keyphrase generation approaches, our model generates sets of diverse keyphrases of a variable number.", "title": "" }, { "docid": "382eb7a0e8bc572506a40bf3cbe6fd33", "text": "The long-term ambition of the Tactile Internet is to enable a democratization of skill, and how it is being delivered globally. An integral part of this is to be able to transmit touch in perceived real-time, which is enabled by suitable robotics and haptics equipment at the edges, along with an unprecedented communications network. The fifth generation (5G) mobile communications systems will underpin this emerging Internet at the wireless edge. This paper presents the most important technology concepts, which lay at the intersection of the larger Tactile Internet and the emerging 5G systems. The paper outlines the key technical requirements and architectural approaches for the Tactile Internet, pertaining to wireless access protocols, radio resource management aspects, next generation core networking capabilities, edge-cloud, and edge-AI capabilities. The paper also highlights the economic impact of the Tactile Internet as well as a major shift in business models for the traditional telecommunications ecosystem.", "title": "" }, { "docid": "7432009332e13ebc473c9157505cb59c", "text": "The use of future contextual information is typically shown to be helpful for acoustic modeling. However, for the recurrent neural network (RNN), it’s not so easy to model the future temporal context effectively, meanwhile keep lower model latency. In this paper, we attempt to design a RNN acoustic model that being capable of utilizing the future context effectively and directly, with the model latency and computation cost as low as possible. The proposed model is based on the minimal gated recurrent unit (mGRU) with an input projection layer inserted in it. Two context modules, temporal encoding and temporal convolution, are specifically designed for this architecture to model the future context. Experimental results on the Switchboard task and an internal Mandarin ASR task show that, the proposed model performs much better than long short-term memory (LSTM) and mGRU models, whereas enables online decoding with a maximum latency of 170 ms. This model even outperforms a very strong baseline, TDNN-LSTM, with smaller model latency and almost half less parameters.", "title": "" }, { "docid": "3bdc2c6a67976108942efd708af8cb2d", "text": "This contribution introduces a new transmission scheme for multiple-input multiple-output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) systems. The new scheme is efficient and suitable especially for symmetric channels such as the link between two base stations or between two antennas on radio beam transmission. 
This Thesis presents the performance analysis of V-BLAST based multiple input multiple output orthogonal frequency division multiplexing (MIMO-OFDM) system with respect to bit error rate per signal to noise ratio (BER/SNR) for various detection techniques. A 2X2 MIMO-OFDM system is used for the performance evaluation. The simulation results shows that the performance of V-BLAST based detection techniques is much better than the conventional methods. Alamouti Space Time Block Code (STBC) scheme is used with orthogonal designs over multiple antennas which showed simulated results are identical to expected theoretical results. With this technique both Bit Error Rate (BER) and maximum diversity gain are achieved by increasing number of antennas on either side. This scheme is efficient in all the applications where system capacity is limited by multipath fading.", "title": "" }, { "docid": "f3b695d0e3d8cca48c9f6f1b3e4d8dbe", "text": "Neural networks that compute over graph structures are a natural fit for problems in a variety of domains, including natural language (parse trees) and cheminformatics (molecular graphs). However, since the computation graph has a different shape and size for every input, such networks do not directly support batched training or inference. They are also difficult to implement in popular deep learning libraries, which are based on static data-flow graphs. We introduce a technique called dynamic batching, which not only batches together operations between different input graphs of dissimilar shape, but also between different nodes within a single input graph. The technique allows us to create static graphs, using popular libraries, that emulate dynamic computation graphs of arbitrary shape and size. We further present a high-level library1 of compositional blocks that simplifies the creation of dynamic graph models. Using the library, we demonstrate concise and batch-wise parallel implementations for a variety of models from the literature.", "title": "" }, { "docid": "3b7c0a822c5937ac9e4d702bb23e3432", "text": "In a video surveillance system with static cameras, object segmentation often fails when part of the object has similar color with the background, resulting in poor performance of the subsequent object tracking. Multiple kernels have been utilized in object tracking to deal with occlusion, but the performance still highly depends on segmentation. This paper presents an innovative system, named Multiple-kernel Adaptive Segmentation and Tracking (MAST), which dynamically controls the decision thresholds of background subtraction and shadow removal around the adaptive kernel regions based on the preliminary tracking results. Then the objects are tracked for the second time according to the adaptively segmented foreground. Evaluations of both segmentation and tracking on benchmark datasets and our own recorded video sequences demonstrate that the proposed method can successfully track objects in similar-color background and/or shadow areas with favorable segmentation performance.", "title": "" }, { "docid": "472ff656dc35c5ed37aae6e3a82e3192", "text": "Status of This Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract JavaScript Object Notation (JSON) is a lightweight, text-based, language-independent data interchange format. It was derived from the ECMAScript Programming Language Standard. 
JSON defines a small set of formatting rules for the portable representation of structured data.", "title": "" }, { "docid": "85e48d5b4add8c21dee6538f6ee714a4", "text": "ion This approach creates a series of intermediate abstractions up to a syndrome category from the individual data (e.g., signs, lab tests) for syndromes indicative of illness due to an agent of bioterrorism. The BioStorm system (Buckeridge et al., 2002; Crubézy, O’Connor, Pincus, & Musen, 2005; Shahar & Musen, 1996)", "title": "" }, { "docid": "a10a51d1070396e1e8a8b186af18f87d", "text": "An upcoming trend for automobile manufacturers is to provide firmware updates over the air (FOTA) as a service. Since the firmware controls the functionality of a vehicle, security is important. To this end, several secure FOTA protocols have been developed. However, the secure FOTA protocols only solve the security for the transmission of the firmware binary. Once the firmware is downloaded, an attacker could potentially modify its contents before it is flashed to the corresponding ECU'S ROM. Thus, there is a need to extend the flashing procedure to also verify that the correct firmware has been flashed to the ECU. We present a framework for self-verification of firmware updates over the air. We include a verification code in the transmission to the vehicle, and after the firmware has been flashed, the integrity of the memory contents can be verified using the verification code. The verification procedure entails only simple hash functions and is thus suitable for the limited resources in the vehicle. Virtualization techniques are employed to establish a trusted computing base in the ECU, which is then used to perform the verification. The proposed framework allows the ECU itself to perform self-verification and can thus ensure the successful flashing of the firmware.", "title": "" }, { "docid": "747ca83d8a4be084a30bbba3e96f248c", "text": "Introduction to chapter. Due to its cryptographic and operational key features such as the one-way function property, high speed and a fixed output size independent of input size the hash algorithm is one of the most important cryptographic primitives. A critical drawback of most cryptographic algorithms is the large computational overheads. This is getting more critical since the data amount to process or communicate is dramatically increasing. In many of such cases, a proper use of the hash algorithm effectively reduces the computational overhead. Digital signature algorithm and the message authentication are the most common applications of the hash algorithms. The increasing data size also motivates hardware designers to have a throughput optimal architecture of a given hash algorithm. In this chapter, some popular hash algorithms and their cryptanalysis are briefly introduced, and a design methodology for throughput optimal architectures of MD4-based hash algorithms is described in detail.", "title": "" }, { "docid": "c90eae76dbde16de8d52170c2715bd7a", "text": "Several literatures converge on the idea that approach and avoidance/withdrawal behaviors are managed by two partially distinct self-regulatory system. The functions of these systems also appear to be embodied in discrepancyreducing and -enlarging feedback loops, respectively. This article describes how the feedback construct has been used to address these two classes of action and the affective experiences that relate to them. 
Further discussion centers on the development of measures of individual differences in approach and avoidance tendencies, and how these measures can be (and have been) used as research tools, to investigate whether other phenomena have their roots in approach or avoidance.", "title": "" }, { "docid": "e45ad997b5a4c7f1ed52c30f4156cd81", "text": "The somatic marker hypothesis provides a systems-level neuroanatomical and cognitive framework for decision making and the influence on it by emotion. The key idea of this hypothesis is that decision making is a process that is influenced by marker signals that arise in bioregulatory processes, including those that express themselves in emotions and feelings. This influence can occur at multiple levels of operation, some of which occur consciously and some of which occur non-consciously. Here we review studies that confirm various predictions from the hypothesis. The orbitofrontal cortex represents one critical structure in a neural system subserving decision making. Decision making is not mediated by the orbitofrontal cortex alone, but arises from large-scale systems that include other cortical and subcortical components. Such structures include the amygdala, the somatosensory/insular cortices and the peripheral nervous system. Here we focus only on the role of the orbitofrontal cortex in decision making and emotional processing, and the relationship between emotion, decision making and other cognitive functions of the frontal lobe, namely working memory.", "title": "" }, { "docid": "3f4be9abdb18ae677641fb788eaafd99", "text": "This project has progressed in multiple stages. In the first year of the project we studied student understanding of basic concepts related to voltage, current, and power in the domain of DC and AC circuits. Protocol analyses conducted in the context of problem solving tasks demonstrated that electricity is a hard domain to learn and understand, and most beginning student knowledge is \"in pieces.\" Students had difficulty in differentiating key concepts such as voltage and current, lacked the ability to map from physical processes to abstract notation, and experienced problems because they had incomplete mappings for metaphors and analogies. The \"invisible nature\" of electricity contributed to the complexity of the domain. Students, in general, had very few preconceived notions about electricity, and most of what they learned was gained through instruction. In year two, our emphasis shifted to a study of how to prepare students to learn DC and AC circuit concepts they had difficulty with, and then to assess how instruction in these topics improved problem solving behavior. This led to the development of our framework for Assessment of Domain Learnability (ADL) and the implementation of a computer environment, STAR-Legacy, that integrates instruction with dynamic assessment. Preliminary studies depict the effectiveness of this approach in improving student problem solving capability in the DC circuit domain. To further characterize problem solving ability with more advanced AC concepts we developed a set of problems dealing with voltage regulators and filters and performed protocol studies on a set of undergraduate and graduate students at Vanderbilt University. The results of this study are also reported.", "title": "" }, { "docid": "1dee6d60a94e434dd6d3b6754e9cd3f3", "text": "The barrier function of the intestine is essential for maintaining the normal homeostasis of the gut and mucosal immune system.
Abnormalities in intestinal barrier function expressed by increased intestinal permeability have long been observed in various gastrointestinal disorders such as Crohn's disease (CD), ulcerative colitis (UC), celiac disease, and irritable bowel syndrome (IBS). Imbalance of metabolizing junction proteins and mucosal inflammation contributes to intestinal hyperpermeability. Emerging studies exploring in vitro and in vivo model system demonstrate that Rho-associated coiled-coil containing protein kinase- (ROCK-) and myosin light chain kinase- (MLCK-) mediated pathways are involved in the regulation of intestinal permeability. With this perspective, we aim to summarize the current state of knowledge regarding the role of inflammation and ROCK-/MLCK-mediated pathways leading to intestinal hyperpermeability in gastrointestinal disorders. In the near future, it may be possible to specifically target these specific pathways to develop novel therapies for gastrointestinal disorders associated with increased gut permeability.", "title": "" }, { "docid": "16d949f6915cbb958cb68a26c6093b6b", "text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.", "title": "" }, { "docid": "f3aaf555028a0c53bec688c0a8e7e95d", "text": "ABSTRACT Translating natural language questions to semantic representations such as SPARQL is a core challenge in open-domain question answering over knowledge bases (KB-QA). Existing methods rely on a clear separation between an offline training phase, where a model is learned, and an online phase where this model is deployed. Two major shortcomings of such methods are that (i) they require access to a large annotated training set that is not always readily available and (ii) they fail on questions from before-unseen domains. To overcome these limitations, this paper presents NEQA, a continuous learning paradigm for KB-QA. Offline, NEQA automatically learns templates mapping syntactic structures to semantic ones from a small number of training question-answer pairs. Once deployed, continuous learning is triggered on cases where templates are insufficient. Using a semantic similarity function between questions and by judicious invocation of non-expert user feedback, NEQA learns new templates that capture previously-unseen syntactic structures. This way, NEQA gradually extends its template repository. NEQA periodically re-trains its underlying models, allowing it to adapt to the language used after deployment. 
Our experiments demonstrate NEQA’s viability, with steady improvement in answering quality over time, and the ability to answer questions from new domains.", "title": "" }, { "docid": "e53de7a588d61f513a77573b7b27f514", "text": "In the past, there have been dozens of studies on automatic authorship classification, and many of these studies concluded that the writing style is one of the best indicators for original authorship. From among the hundreds of features which were developed, syntactic features were best able to reflect an author's writing style. However, due to the high computational complexity for extracting and computing syntactic features, only simple variations of basic syntactic features such as function words, POS(Part of Speech) tags, and rewrite rules were considered. In this paper, we propose a new feature set of k-embedded-edge subtree patterns that holds more syntactic information than previous feature sets. We also propose a novel approach to directly mining them from a given set of syntactic trees. We show that this approach reduces the computational burden of using complex syntactic structures as the feature set. Comprehensive experiments on real-world datasets demonstrate that our approach is reliable and more accurate than previous studies.", "title": "" }, { "docid": "5443a07fe5f020972cbdce8f5996a550", "text": "The training of severely disabled individuals on the use of electric power wheelchairs creates many challenges, particularly in the case of children. The adjustment of equipment and training on a per-patient basis in an environment with limited specialists and resources often leads to a reduced amount of training time per patient. Virtual reality rehabilitation has recently been proven an effective way to supplement patient rehabilitation, although some important challenges remain including high setup/equipment costs and time-consuming continual adjustments to the simulation as patients improve. We propose a design for a flexible, low-cost rehabilitation system that uses virtual reality training and games to engage patients in effective instruction on the use of powered wheelchairs. We also propose a novel framework based on Bayesian networks for self-adjusting adaptive training in virtual rehabilitation environments. Preliminary results from a user evaluation and feedback from our rehabilitation specialist collaborators support the effectiveness of our approach.", "title": "" }, { "docid": "db597c88e71a8397b81216282d394623", "text": "In many real applications, graph data is subject to uncertainties due to incompleteness and imprecision of data. Mining such uncertain graph data is semantically different from and computationally more challenging than mining conventional exact graph data. This paper investigates the problem of mining uncertain graph data and especially focuses on mining frequent subgraph patterns on an uncertain graph database. A novel model of uncertain graphs is presented, and the frequent subgraph pattern mining problem is formalized by introducing a new measure, called expected support. This problem is proved to be NP-hard. An approximate mining algorithm is proposed to find a set of approximately frequent subgraph patterns by allowing an error tolerance on expected supports of discovered subgraph patterns. The algorithm uses efficient methods to determine whether a subgraph pattern can be output or not and a new pruning method to reduce the complexity of examining subgraph patterns. 
Analytical and experimental results show that the algorithm is very efficient, accurate, and scalable for large uncertain graph databases. To the best of our knowledge, this paper is the first one to investigate the problem of mining frequent subgraph patterns from uncertain graph data.", "title": "" }, { "docid": "623c78e515abee9830eb0b79e773dcec", "text": "The main focus in this research paper is to experiment deeply with, and find alternative solutions to, the image segmentation and character recognition problems within the License Plate Recognition framework. Three main stages are identified in such applications. First, it is necessary to locate and extract the license plate region from a larger scene image. Second, having a license plate region to work with, the alphanumeric characters in the plate need to be extracted from the background. Third, deliver them to a character recognition system (BOX APPROACH) for recognition. In order to identify a vehicle by reading its license plate successfully, it is obviously necessary to locate the plate in the scene image provided by some acquisition system (e.g. video or still camera). Locating the region of interest helps in dramatically reducing both the computational expense and algorithm complexity. For example, a currently common 1024x768 resolution image contains a total of 786,432 pixels, while the region of interest (in this case a license plate) may account for only 10% of the image area. Also, the input to the following segmentation and recognition stages is simplified, resulting in easier algorithm design and shorter computation times. The paper mainly works with standard license plates, but the techniques, algorithms and parameters that are used can be adjusted easily for any similar number plates, even those with a different alphanumeric set.", "title": "" } ]
scidocsrr
773a4eaaf8a381f8d6511cb5e81af6ab
REAL-TIME FULLY AUTOMATED RESTAURANT MANAGEMENT AND COMMUNICATION SYSTEM “RESTO”
[ { "docid": "897efb599e554bf453a7b787c5874d48", "text": "The Rampant growth of wireless technology and Mobile devices in this era is creating a great impact on our lives. Some early efforts have been made to combine and utilize both of these technologies in advancement of hospitality industry. This research work aims to automate the food ordering process in restaurant and also improve the dining experience of customers. In this paper we discuss about the design & implementation of automated food ordering system with real time customer feedback (AOS-RTF) for restaurants. This system, implements wireless data access to servers. The android application on user’s mobile will have all the menu details. The order details from customer’s mobile are wirelessly updated in central database and subsequently sent to kitchen and cashier respectively. The restaurant owner can manage the menu modifications easily. The wireless application on mobile devices provide a means of convenience, improving efficiency and accuracy for restaurants by saving time, reducing human errors and real-time customer feedback. This system successfully over comes the drawbacks in earlier PDA based food ordering system and is less expensive and more effective than the multi-touchable restaurant management systems.", "title": "" } ]
[ { "docid": "fa91331ef31de20ae63cc6c8ab33f062", "text": "Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural, activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.", "title": "" }, { "docid": "5cccc7cc748d3461dc3c0fb42a09245f", "text": "The self and attachment difficulties associated with chronic childhood abuse and other forms of pervasive trauma must be understood and addressed in the context of the therapeutic relationship for healing to extend beyond resolution of traditional psychiatric symptoms and skill deficits. The authors integrate contemporary research and theory about attachment and complex developmental trauma, including dissociation, and apply it to psychotherapy of complex trauma, especially as this research and theory inform the therapeutic relationship. Relevant literature on complex trauma and attachment is integrated with contemporary trauma theory as the background for discussing relational issues that commonly arise in this treatment, highlighting common challenges such as forming a therapeutic alliance, managing frame and boundaries, and working with dissociation and reenactments.", "title": "" }, { "docid": "7af9293fbe12f3e859ee579d0f8739a5", "text": "We present the findings from a Dutch field study of 30 outsourcing deals totaling to more than 100 million Euro, where both customers and corresponding IT-outsourcing providers participated. The main objective of the study was to examine from a number of well-known factors whether they discriminate between IT-outsourcing success and failure in the early phase of service delivery and to determine their impact on the chance on a successful deal. We investigated controllable factors to increase the odds during sourcing and rigid factors as a warning sign before closing a deal. Based on 250 interviews we collected 28 thousand data points. From the data and the perceived failure or success of the closed deals we investigated the discriminative power of the determinants (ex post). 
We found three statistically significant controllable factors that discriminated in an early phase between failure and success. They are: working according to the transition plan, demand management and, to our surprise, communication within the supplier organisation (so not between client and supplier). These factors also turned out to be the only significant factors for a (logistic) model predicting the chance of a successful IT-outsourcing. Improving demand management and internal communication at the supplier increases the odds the most. Sticking to the transition plan only modestly. Other controllable factors were not significant in our study. They are managing the business case, transfer of staff or assets, retention of expertise and communication within the client organisation. Of the rigid factors, the motive to outsource, cultural differences, and the type of work were insignificant. The motive of the supplier was significant: internal motivations like increasing profit margins or business volume decreased the chance of success while external motivations like increasing market share or becoming a player increased the success rate. From the data we inferred that the degree of experience with sourcing did not show to be a convincing factor of success. Hiring sourcing consultants worked contra-productive: it lowered chances of success.", "title": "" }, { "docid": "ac56eb533e3ae40b8300d4269fd2c08f", "text": "We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.", "title": "" }, { "docid": "e66ce20b22d183d5b1d9aec2cdc1f736", "text": "Performance tests were carried out for a microchannel printed circuit heat exchanger (PCHE), which was fabricated with micro photo-etching and diffusion bonding technologies. The microchannel PCHE was tested for Reynolds numbers in the range of 100‒850 varying the hot-side inlet temperature between 40 °C–50 °C while keeping the cold-side temperature fixed at 20 °C. It was found that the average heat transfer rate and heat transfer performance of the countercurrrent configuration were 6.8% and 10%‒15% higher, respectively, than those of the parallel flow. The average heat transfer rate, heat transfer performance and pressure drop increased with increasing Reynolds number in all experiments. Increasing inlet temperature did not affect the heat transfer performance while it slightly decreased the pressure drop in the experimental range considered. 
Empirical correlations have been developed for the heat transfer coefficient and pressure drop factor as functions of the Reynolds number.", "title": "" }, { "docid": "269e1c0d737beafd10560360049c6ee3", "text": "There is no doubt that Social media has gained wider acceptability and usability and is also becoming probably the most important communication tools among students especially at the higher level of educational pursuit. As much as social media is viewed as having bridged the gap in communication that existed. Within the social media Facebook, Twitter and others are now gaining more and more patronage. These websites and social forums are way of communicating directly with other people socially. Social media has the potentials of influencing decision-making in a very short time regardless of the distance. On the bases of its influence, benefits and demerits this study is carried out in order to highlight the potentials of social media in the academic setting by collaborative learning and improve the students' academic performance. The results show that collaborative learning positively and significantly with interactive with peers, interactive with teachers and engagement which impact the students’ academic performance.", "title": "" }, { "docid": "f84c399ff746a8721640e115fd20745e", "text": "Self-interference cancellation invalidates a long-held fundamental assumption in wireless network design that radios can only operate in half duplex mode on the same channel. Beyond enabling true in-band full duplex, which effectively doubles spectral efficiency, self-interference cancellation tremendously simplifies spectrum management. Not only does it render entire ecosystems like TD-LTE obsolete, it enables future networks to leverage fragmented spectrum, a pressing global issue that will continue to worsen in 5G networks. Self-interference cancellation offers the potential to complement and sustain the evolution of 5G technologies toward denser heterogeneous networks and can be utilized in wireless communication systems in multiple ways, including increased link capacity, spectrum virtualization, any-division duplexing (ADD), novel relay solutions, and enhanced interference coordination. By virtue of its fundamental nature, self-interference cancellation will have a tremendous impact on 5G networks and beyond.", "title": "" }, { "docid": "ddc3241c09a33bde1346623cf74e6866", "text": "This paper presents a new technique for predicting wind speed and direction. This technique is based on using a linear time-series-based model relating the predicted interval to its corresponding one- and two-year old data. The accuracy of the model for predicting wind speeds and directions up to 24 h ahead have been investigated using two sets of data recorded during winter and summer season at Madison weather station. Generated results are compared with their corresponding values when using the persistent model. The presented results validate the effectiveness and accuracy of the proposed prediction model for wind speed and direction.", "title": "" }, { "docid": "9157266c7dea945bf5a68f058836e681", "text": "For the task of implicit discourse relation recognition, traditional models utilizing manual features can suffer from data sparsity problem. Neural models provide a solution with distributed representations, which could encode the latent semantic information, and are suitable for recognizing semantic relations between argument pairs. 
However, conventional vector representations usually adopt embeddings at the word level and cannot well handle the rare word problem without carefully considering morphological information at character level. Moreover, embeddings are assigned to individual words independently, which lacks of the crucial contextual information. This paper proposes a neural model utilizing context-aware character-enhanced embeddings to alleviate the drawbacks of the current word level representation. Our experiments show that the enhanced embeddings work well and the proposed model obtains state-of-the-art results.", "title": "" }, { "docid": "27bcbde431c340db7544b58faa597fb7", "text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.", "title": "" }, { "docid": "1568a9bb47ca0ef28bccf6fdeaad87b7", "text": "Many Android apps use SSL/TLS to transmit sensitive information securely. However, developers often provide their own implementation of the standard SSL/TLS certificate validation process. Unfortunately, many such custom implementations have subtle bugs, have built-in exceptions for self-signed certificates, or blindly assert all certificates are valid, leaving many Android apps vulnerable to SSL/TLS Man-in-the-Middle attacks. In this paper, we present SMV-HUNTER, a system for the automatic, large-scale identification of such vulnerabilities that combines both static and dynamic analysis. The static component detects when a custom validation procedure has been given, thereby identifying potentially vulnerable apps, and extracts information used to guide the dynamic analysis, which then uses user interface enumeration and automation techniques to trigger the potentially vulnerable code under an active Man-in-the-Middle attack. We have implemented SMV-HUNTER and evaluated it on 23,418 apps downloaded from the Google Play market, of which 1,453 apps were identified as being potentially vulnerable by static analysis, with an average overhead of approximately 4 seconds per app, running on 16 threads in parallel. Among these potentially vulnerable apps, 726 were confirmed vulnerable using our dynamic analysis, with an average overhead of about 44 seconds per app, running on 8 emulators in parallel.", "title": "" }, { "docid": "a2a85b11d4bd6cc6cc709ae1efd11322", "text": "This paper presents a new adoption framework i.e. Individual, Technology, Organization and Environment (I-TOE) to address the factors influencing computer-assisted auditing tools (CAATs) acceptance in public audit firms. CAATs are audit technology that helps in achieving effective and efficient audit work. 
While CAATs adoption varies among audit departments, prior studies focused narrowly on CAATs acceptance issues from the individual perspective and no comprehensive study has been done that focused on both organization and individual standpoints. Realizing this gap, this paper aims to predict CAATs adoption factors using the I-TOE framework. I-TOE stresses on the relationship of Individuals factors (i.e. performance expectancy, effort expectancy, social influence, facilitating condition, hedonic motivation and habit), CAATs Technology (i.e. technology cost-benefit, risk and technology fit), Organization characteristics (i.e. size, readiness and top management), and Environment factors (i.e. client’s AIS complexity, competitive pressure and professional accounting body regulations) towards CAATs acceptance. It integrates both Unified Theory of Acceptance and Use of Technology 2 and Technology-Organization-Environment framework. I-TOE provides a comprehensive model that helps audit firms and regulatory bodies to develop strategies and policies to increase CAATs adoption. Empirical study through questionnaire survey will be conducted to validate I-TOE model.", "title": "" }, { "docid": "05f941acd4b2bd1188c7396d7edbd684", "text": "A blockchain is a distributed ledger for recording transactions, maintained by many nodes without central authority through a distributed cryptographic protocol. All nodes validate the information to be appended to the blockchain, and a consensus protocol ensures that the nodes agree on a unique order in which entries are appended. Consensus protocols for tolerating Byzantine faults have received renewed attention because they also address blockchain systems. This work discusses the process of assessing and gaining confidence in the resilience of a consensus protocols exposed to faults and adversarial nodes. We advocate to follow the established practice in cryptography and computer security, relying on public reviews, detailed models, and formal proofs; the designers of several practical systems appear to be unaware of this. Moreover, we review the consensus protocols in some prominent permissioned blockchain platforms with respect to their fault models and resilience against attacks. 1998 ACM Subject Classification C.2.4 Distributed Systems, D.1.3 Concurrent Programming", "title": "" }, { "docid": "6be3f84e371874e2df32de9cb1d92482", "text": "We present an accurate and efficient stereo matching method using locally shared labels, a new labeling scheme that enables spatial propagation in MRF inference using graph cuts. They give each pixel and region a set of candidate disparity labels, which are randomly initialized, spatially propagated, and refined for continuous disparity estimation. We cast the selection and propagation of locally-defined disparity labels as fusion-based energy minimization. The joint use of graph cuts and locally shared labels has advantages over previous approaches based on fusion moves or belief propagation, it produces submodular moves deriving a subproblem optimality, enables powerful randomized search, helps to find good smooth, locally planar disparity maps, which are reasonable for natural scenes, allows parallel computation of both unary and pairwise costs. 
Our method is evaluated using the Middlebury stereo benchmark and achieves first place in sub-pixel accuracy.", "title": "" }, { "docid": "e380710014dd33734636f077a59f1b62", "text": "Since the work of Golgi and Cajal, light microscopy has remained a key tool for neuroscientists to observe cellular properties. Ongoing advances have enabled new experimental capabilities using light to inspect the nervous system across multiple spatial scales, including ultrastructural scales finer than the optical diffraction limit. Other progress permits functional imaging at faster speeds, at greater depths in brain tissue, and over larger tissue volumes than previously possible. Portable, miniaturized fluorescence microscopes now allow brain imaging in freely behaving mice. Complementary progress on animal preparations has enabled imaging in head-restrained behaving animals, as well as time-lapse microscopy studies in the brains of live subjects. Mouse genetic approaches permit mosaic and inducible fluorescence-labeling strategies, whereas intrinsic contrast mechanisms allow in vivo imaging of animals and humans without use of exogenous markers. This review surveys such advances and highlights emerging capabilities of particular interest to neuroscientists.", "title": "" }, { "docid": "6abe1b7806f6452bbcc087b458a7ef96", "text": "We demonstrate distributed, online, and real-time cooperative localization and mapping between multiple robots operating throughout an unknown environment using indirect measurements. We present a novel Expectation Maximization (EM) based approach to efficiently identify inlier multi-robot loop closures by incorporating robot pose uncertainty, which significantly improves the trajectory accuracy over long-term navigation. An EM and hypothesis based method is used to determine a common reference frame. We detail a 2D laser scan correspondence method to form robust correspondences between laser scans shared amongst robots. The implementation is experimentally validated using teams of aerial vehicles, and analyzed to determine its accuracy, computational efficiency, scalability to many robots, and robustness to varying environments. We demonstrate through multiple experiments that our method can efficiently build maps of large indoor and outdoor environments in a distributed, online, and real-time setting.", "title": "" }, { "docid": "b25b7100c035ad2953fb43087ede1625", "text": "In this paper, a novel 10W substrate integrated waveguide (SIW) high power amplifier (HPA) designed with SIW matching network (MN) is presented. The SIW MN is connected with microstrip line using microstrip-to-SIW transition. An inductive metallized post in SIW is employed to realize impedance matching. At the fundamental frequency of 2.14 GHz, the impedance matching is realized by moving the position of the inductive metallized post in the SIW. Both the input and output MNs are designed with the proposed SIW-based MN concept. One SIW-based 10W HPA using GaN HEMT at 2.14 GHz is designed, fabricated, and measured. The proposed SIW-based HPA can be easily connected with any microstrip circuit with microstrip-to-SIW transition. Measured results show that the maximum power added efficiency (PAE) is 65.9 % with 39.8 dBm output power and the maximum gain is 20.1 dB with 30.9 dBm output power at 2.18 GHz. 
The size of the proposed SIW-based HPA is comparable with other microstrip-based PAs designed at the operating frequency.", "title": "" }, { "docid": "5097aae222f76023cf1d6dbe7765e504", "text": "In this article, we introduce a prototype of an innovative technology for proving the origins of captured digital media. In an era of fake news, when someone shows us a video or picture of some event, how can we trust its authenticity? It seems that the public no longer believe that traditional media is a reliable reference of fact, perhaps due, in part, to the onset of many diverse sources of conflicting information, via social media. Indeed, the issue of \"fake\" reached a crescendo during the 2016 U.S. Presidential Election, when the winner, Donald Trump, claimed that The New York Times was trying to discredit him by pushing disinformation. Current research into overcoming the problem of fake news does not focus on establishing the ownership of media resources used in such stories-the blockchain-based application introduced in this article is technology that is capable of indicating the authenticity of digital media. Put simply, using the trust mechanisms of blockchain technology, the tool can show, beyond doubt, the provenance of any source of digital media, including images used out of context in attempts to mislead. Although the application is an early prototype and its capability to find fake resources is somewhat limited, we outline future improvements that would overcome such limitations. Furthermore, we believe that our application (and its use of blockchain technology and standardized metadata) introduces a novel approach to overcoming falsities in news reporting and the provenance of media resources used therein. However, while our application has the potential to be able to verify the originality of media resources, we believe that technology is only capable of providing a partial solution to fake news. That is because it is incapable of proving the authenticity of a news story as a whole. We believe that takes human skills.", "title": "" }, { "docid": "87068ab038d08f9e1e386bc69ee8a5b2", "text": "The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and how they affect learning guarantees is still missing. In this paper, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with homogeneous activation functions. This analysis allows us to separate data representation from learning, and to provide a canonical measure of model complexity, the RKHS norm, which controls both stability and generalization of any learned model. 
In addition to models in the constructed RKHS, our stability analysis also applies to convolutional networks with generic activations such as rectified linear units, and we discuss its relationship with recent generalization bounds based on spectral norms.", "title": "" }, { "docid": "6a3210307c98b4311271c29da142b134", "text": "Accelerating innovation in renewable energy (RE) requires not just more finance, but finance servicing the entire innovation landscape. Given that finance is not ‘neutral’, more information is required on the quality of finance that meets technology and innovation stage-specific financing needs for the commercialization of RE technologies. We investigate the relationship between different financial actors with investment in different RE technologies. We construct a new deal-level dataset of global RE asset finance from 2004 to 2014 based on Bloomberg New Energy Finance data, that distinguishes 10 investor types (e.g. private banks, public banks, utilities) and 11 RE technologies into which they invest. We also construct a heuristic investment risk measure that varies with technology, time and country of investment. We find that particular investor types have preferences for particular risk levels, and hence particular types of RE. Some investor types invested into far riskier portfolios than others, and financing of individual high-risk technologies depended on investment by specific investor types. After the 2008 financial crisis, state-owned or controlled companies and banks emerged as the high-risk taking locomotives of RE asset finance. We use these preliminary results to formulate new questions for future RE policy, and encourage further research.", "title": "" } ]
scidocsrr
6ab14604773495f791954ce90412f07b
Underwater SLAM: Challenges, state of the art, algorithms and a new biologically-inspired approach
[ { "docid": "692207fdd7e27a04924000648f8b1bbf", "text": "Many animals, on air, water, or land, navigate in three-dimensional (3D) environments, yet it remains unclear how brain circuits encode the animal's 3D position. We recorded single neurons in freely flying bats, using a wireless neural-telemetry system, and studied how hippocampal place cells encode 3D volumetric space during flight. Individual place cells were active in confined 3D volumes, and in >90% of the neurons, all three axes were encoded with similar resolution. The 3D place fields from different neurons spanned different locations and collectively represented uniformly the available space in the room. Theta rhythmicity was absent in the firing patterns of 3D place cells. These results suggest that the bat hippocampus represents 3D volumetric space by a uniform and nearly isotropic rate code.", "title": "" } ]
[ { "docid": "2292c60d69c94f31c2831c2f21c327d8", "text": "With the abundance of raw data generated from various sources, Big Data has become a preeminent approach in acquiring, processing, and analyzing large amounts of heterogeneous data to derive valuable evidences. The size, speed, and formats in which data is generated and processed affect the overall quality of information. Therefore, Quality of Big Data (QBD) has become an important factor to ensure that the quality of data is maintained at all Big data processing phases. This paper addresses the QBD at the pre-processing phase, which includes sub-processes like cleansing, integration, filtering, and normalization. We propose a QBD model incorporating processes to support Data quality profile selection and adaptation. In addition, it tracks and registers on a data provenance repository the effect of every data transformation happened in the pre-processing phase. We evaluate the data quality selection module using large EEG dataset. The obtained results illustrate the importance of addressing QBD at an early phase of Big Data processing lifecycle since it significantly save on costs and perform accurate data analysis.", "title": "" }, { "docid": "f7daa0d175d4a7ae8b0869802ff3c4ab", "text": "Several consumer speech devices feature voice interfaces that perform on-device keyword spotting to initiate user interactions. Accurate on-device keyword spotting within a tight CPU budget is crucial for such devices. Motivated by this, we investigated two ways to improve deep neural network (DNN) acoustic models for keyword spotting without increasing CPU usage. First, we used low-rank weight matrices throughout the DNN. This allowed us to increase representational power by increasing the number of hidden nodes per layer without changing the total number of multiplications. Second, we used knowledge distilled from an ensemble of much larger DNNs used only during training. We systematically evaluated these two approaches on a massive corpus of far-field utterances. Alone both techniques improve performance and together they combine to give significant reductions in false alarms and misses without increasing CPU or memory usage.", "title": "" }, { "docid": "b5f2717f3398a94ebeac2465dff98098", "text": "Blockchains primarily enable credible accounting of digital events, e.g., money transfers in cryptocurrencies. However, beyond this original purpose, blockchains also irrevocably record arbitrary data, ranging from short messages to pictures. This does not come without risk for users as each participant has to locally replicate the complete blockchain, particularly including potentially harmful content. We provide the first systematic analysis of the benefits and threats of arbitrary blockchain content. Our analysis shows that certain content, e.g., illegal pornography, can render the mere possession of a blockchain illegal. Based on these insights, we conduct a thorough quantitative and qualitative analysis of unintended content on Bitcoin’s blockchain. Although most data originates from benign extensions to Bitcoin’s protocol, our analysis reveals more than 1600 files on the blockchain, over 99% of which are texts or images. Among these files there is clearly objectionable content such as links to child pornography, which is distributed to all Bitcoin participants. 
With our analysis, we thus highlight the importance for future blockchain designs to address the possibility of unintended data insertion and protect blockchain users accordingly.", "title": "" }, { "docid": "2c6fd73e6ec0ebc0ae257676c712d024", "text": "This paper addresses the problem of spatiotemporal localization of actions in videos. Compared to leading approaches, which all learn to localize based on carefully annotated boxes on training video frames, we adhere to a weakly-supervised solution that only requires a video class label. We introduce an actor-supervised architecture that exploits the inherent compositionality of actions in terms of actor transformations, to localize actions. We make two contributions. First, we propose actor proposals derived from a detector for human and non-human actors intended for images, which is linked over time by Siamese similarity matching to account for actor deformations. Second, we propose an actor-based attention mechanism that enables the localization of the actions from action class labels and actor proposals and is end-to-end trainable. Experiments on three human and non-human action datasets show actor supervision is state-of-the-art for weakly-supervised action localization and is even competitive to some fullysupervised alternatives.", "title": "" }, { "docid": "87a256b5e67b97cf4a11b5664a150295", "text": "This paper presents a method for speech emotion recognition using spectrograms and deep convolutional neural network (CNN). Spectrograms generated from the speech signals are input to the deep CNN. The proposed model consisting of three convolutional layers and three fully connected layers extract discriminative features from spectrogram images and outputs predictions for the seven emotions. In this study, we trained the proposed model on spectrograms obtained from Berlin emotions dataset. Furthermore, we also investigated the effectiveness of transfer learning for emotions recognition using a pre-trained AlexNet model. Preliminary results indicate that the proposed approach based on freshly trained model is better than the fine-tuned model, and is capable of predicting emotions accurately and efficiently.", "title": "" }, { "docid": "f544c879f4f496a752c3c3434469bf90", "text": "Peter Eden Information Security Research group School of Computing and Mathematics Department of Computing, Engineering and Science University of South Wales Pontypridd, CF371DL UK peter.eden@southwales.ac.uk Andrew Blyth Information Security Research group School of Computing and Mathematics Department of Computing, Engineering and Science University of South Wales Pontypridd, CF371DL UK andrew.blyth@southwales.ac.uk", "title": "" }, { "docid": "ad7b715f434f3a500be8d52a047b9be1", "text": "This paper presents a quantitative analysis of data collected by an online testing system for SQL \"select\" queries. The data was collected from almost one thousand students, over eight years. We examine which types of queries our students found harder to write. The seven types of SQL queries studied are: simple queries on one table; grouping, both with and without \"having\"; natural joins; simple and correlated sub-queries; and self-joins. The order of queries in the preceding sentence reflects the order of student difficulty we see in our data.", "title": "" }, { "docid": "f16676f00cd50173d75bd61936ec200c", "text": "Training of the neural autoregressive density estimator (NADE) can be viewed as doing one step of probabilistic inference on missing values in data. 
We propose a new model that extends this inference scheme to multiple steps, arguing that it is easier to learn to improve a reconstruction in k steps rather than to learn to reconstruct in a single inference step. The proposed model is an unsupervised building block for deep learning that combines the desirable properties of NADE and multi-prediction training: (1) Its test likelihood can be computed analytically, (2) it is easy to generate independent samples from it, and (3) it uses an inference engine that is a superset of variational inference for Boltzmann machines. The proposed NADE-k is competitive with the state-of-the-art in density estimation on the two datasets tested.", "title": "" }, { "docid": "7eb7cfc2ca574b0965008117cf7070d9", "text": "We present a framework, Atlas, which incorporates application-awareness into Software-Defined Networking (SDN), which is currently capable of L2/3/4-based policy enforcement but agnostic to higher layers. Atlas enables fine-grained, accurate and scalable application classification in SDN. It employs a machine learning (ML) based traffic classification technique, a crowd-sourcing approach to obtain ground truth data and leverages SDN's data reporting mechanism and centralized control. We prototype Atlas on HP Labs wireless networks and observe 94% accuracy on average, for top 40 Android applications.", "title": "" }, { "docid": "90ce5197708ee86f42ac8c5e985e481f", "text": "This paper proposes a method to predict fluctuations in the prices of cryptocurrencies, which are increasingly used for online transactions worldwide. Little research has been conducted on predicting fluctuations in the price and number of transactions of a variety of cryptocurrencies. Moreover, the few methods proposed to predict fluctuation in currency prices are inefficient because they fail to take into account the differences in attributes between real currencies and cryptocurrencies. This paper analyzes user comments in online cryptocurrency communities to predict fluctuations in the prices of cryptocurrencies and the number of transactions. By focusing on three cryptocurrencies, each with a large market size and user base, this paper attempts to predict such fluctuations by using a simple and efficient method.", "title": "" }, { "docid": "2ed35ae53d1d5b6a85a9ea234ecf24ec", "text": "Low back pain is a significant public health problem and one of the most commonly reported reasons for the use of Complementary Alternative Medicine. A randomized control trial was conducted in subjects with non-specific chronic low back pain comparing Iyengar yoga therapy to an educational control group. Both programs were 16 weeks long. Subjects were primarily self-referred and screened by primary care physicians for study of inclusion/exclusion criteria. The primary outcome for the study was functional disability. Secondary outcomes including present pain intensity, pain medication usage, pain-related attitudes and behaviors, and spinal range of motion were measured before and after the interventions. Subjects had low back pain for 11.2+/-1.54 years and 48% used pain medication. Overall, subjects presented with less pain and lower functional disability than subjects in other published intervention studies for chronic low back pain. Of the 60 subjects enrolled, 42 (70%) completed the study. 
Multivariate analyses of outcomes in the categories of medical, functional, psychological and behavioral factors indicated that significant differences between groups existed in functional and medical outcomes but not for the psychological or behavioral outcomes. Univariate analyses of medical and functional outcomes revealed significant reductions in pain intensity (64%), functional disability (77%) and pain medication usage (88%) in the yoga group at the post and 3-month follow-up assessments. These preliminary data indicate that the majority of self-referred persons with mild chronic low back pain will comply to and report improvement on medical and functional pain-related outcomes from Iyengar yoga therapy.", "title": "" }, { "docid": "850f29a1d3c5bc96bb36787aba428331", "text": "In this paper, we introduce a novel framework for WEakly supervised Learning of Deep cOnvolutional neural Networks (WELDON). Our method is dedicated to automatically selecting relevant image regions from weak annotations, e.g. global image labels, and encompasses the following contributions. Firstly, WELDON leverages recent improvements on the Multiple Instance Learning paradigm, i.e. negative evidence scoring and top instance selection. Secondly, the deep CNN is trained to optimize Average Precision, and fine-tuned on the target dataset with efficient computations due to convolutional feature sharing. A thorough experimental validation shows that WELDON outperforms state-of-the-art results on six different datasets.", "title": "" }, { "docid": "9444d244964ba6cc679a298efbf39cc9", "text": "STREAMSCOPE (or STREAMS) is a reliable distributed stream computation engine that has been deployed in shared 20,000-server production clusters at Microsoft. STREAMS provides a continuous temporal stream model that allows users to express complex stream processing logic naturally and declaratively. STREAMS supports business-critical streaming applications that can process tens of billions (or tens of terabytes) of input events per day continuously with complex logic involving tens of temporal joins, aggregations, and sophisticated userdefined functions, while maintaining tens of terabytes inmemory computation states on thousands of machines. STREAMS introduces two abstractions, rVertex and rStream, to manage the complexity in distributed stream computation systems. The abstractions allow efficient and flexible distributed execution and failure recovery, make it easy to reason about correctness even with failures, and facilitate the development, debugging, and deployment of complex multi-stage streaming applications.", "title": "" }, { "docid": "c340cbb5f6b062caeed570dc2329e482", "text": "We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip and the silicon synapses to receive spikes from the outside is based on the \"address-event representation\" (AER). We describe the analog circuits designed to implement the silicon neurons and synapses and present experimental data showing the neuron's response properties and the synapses characteristics, in response to AER input spike trains. 
Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.", "title": "" }, { "docid": "3500278940baaf6f510ad47463cbf5ed", "text": "Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (MMaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. Our method M-MaxLSTMCNN consistently shows strong performances in several tasks (i.e., measure textual similarity, identify paraphrase, recognize textual entailment). According to the experimental results on STS Benchmark dataset and SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features) as well as does not require pretrained word embeddings to have the same dimension.", "title": "" }, { "docid": "cb6354591bbcf130beea46701ae0e59f", "text": "Requirements engineering process is a human endeavor. People who hold a stake in a project are involved in the requirements engineering process. They are from different backgrounds and with different organizational and individual goals, social positions, and personalities. They have different ways to understand and express the knowledge, and communicate with others. The requirements development processes, therefore, vary widely depending on the people involved. In order to acquire quality requirements from different people, a large number of methods exit. However, because of the inadequate understanding about methods and the variability of the situations in which requirements are developed, it is difficult for organizations to identify a set of appropriate methods to develop requirements in a structured and systematic way. The insufficient requirements engineering process forms one important factor that cause the failure of an IT project [29].", "title": "" }, { "docid": "1406e692dc31cd4f89ea9a5441b84691", "text": "2004 Recent advancements in Field Programmable Gate Array (FPGA) technology have resulted in FPGA devices that support the implementation of a complete computer system on a single FPGA chip. A soft-core processor is a central component of such a system. A soft-core processor is a microprocessor defined in software, which can be synthesized in programmable hardware, such as FPGAs. The Nios soft-core processor from Altera Corporation is studied and a Verilog implementation of the Nios soft-core processor has been developed, called UT Nios. The UT Nios is described, its performance dependence on various architectural parameters is investigated and then compared to the original implementation from Altera. Experiments show that the performance of different types of applications varies significantly depending on the architectural parameters. The performance comparison shows that UT Nios achieves performance comparable to the original implementation. Finally, the design methodology, experiences from the design process and issues encountered are discussed. 
", "title": "" }, { "docid": "4b2e6f5a0ce30428377df72d8350d637", "text": "Sentence matching is widely used in various natural language tasks such as natural language inference, paraphrase identification, and question answering. For these tasks, understanding logical and semantic relationship between two sentences is required but it is yet challenging. Although attention mechanism is useful to capture the semantic relationship and to properly align the elements of two sentences, previous methods of attention mechanism simply use a summation operation which does not retain original features enough. Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers. It enables preserving the original and the co-attentive feature information from the bottommost word embedding layer to the uppermost recurrent layer. To alleviate the problem of an ever-increasing size of feature vectors due to dense concatenation operations, we also propose to use an autoencoder after dense concatenation. We evaluate our proposed architecture on highly competitive benchmark datasets related to sentence matching. Experimental results show that our architecture, which retains recurrent and attentive features, achieves state-of-the-art performances for most of the tasks.", "title": "" }, { "docid": "9a925106f3cdf95ec08b7bf53cbb526f", "text": "High-utility itemset mining (HUIM) is an emerging area of data mining and is widely used. HUIM differs from the frequent itemset mining (FIM), as the latter considers only the frequency factor, whereas the former has been designed to address both quantity and profit factors to reveal the most profitable products. The challenges of generating the HUI include exponential complexity in both time and space. Moreover, the pruning techniques of reducing the search space, which is available in FIM because of their monotonic and anti-monotonic properties, cannot be used in HUIM. In this paper, we propose a novel selective database projection-based HUI mining algorithm (SPHUI-Miner). We introduce an efficient data format, named HUI-RTPL, which is an optimum and compact representation of data requiring low memory. We also propose two novel data structures, viz, selective database projection utility list and Tail-Count list to prune the search space for HUI mining. Selective projections of the database reduce the scanning time of the database making our proposed approach more efficient. It creates unique data instances and new projections for data having less dimensions thereby resulting in faster HUI mining. We also prove upper bounds on the amount of memory consumed by these projections. Experimental comparisons on various benchmark data sets show that the SPHUI-Miner algorithm outperforms the state-of-the-art algorithms in terms of computation time, memory usage, scalability, and candidates generation.", "title": "" }, { "docid": "bb74cbb76c6efb4a030d2c5653e18842", "text": "Two new wideband in-phase and out-of-phase balanced power dividing/combining networks are proposed in this paper. Based on matrix transformation, the differential-mode and common-mode equivalent circuits of the two wideband in-phase and out-of-phase networks can be easily deduced. 
A patterned ground-plane technique is used to realize the strong coupling of the shorted coupled lines for the differential mode. Two planar wideband in-phase and out-of-phase balanced networks with bandwidths of 55.3% and 64.4% for the differential mode with wideband common-mode suppression are designed and fabricated. The theoretical and measured results agree well with each other and show good in-band performances.", "title": "" } ]
scidocsrr
5d9f7ff852ad33e10fdfdaab4277733e
Collaborative Filtering With User-Item Co-Autoregressive Models
[ { "docid": "d0bf246feac1b5e6924719b5b7c76189", "text": "This paper proposes implicit CF-NADE, a neural autoregressive model for collaborative filtering tasks using implicit feedback( e.g. click/watch/browse behaviors). We first convert a user's implicit feedback into a \"like\" vector and a confidence vector, and then model the probability of the \"like\" vector, weighted by the confidence vector. The training objective of implicit CF-NADE is to maximize a weighted negative log-likelihood. We test the performance of implicit CF-NADE on a dataset collected from a popular digital TV streaming service. More specifically, in the experiments, we describe how to convert watch counts into implicit \"relative rating\", and feed into implicit CF-NADE. Then we compare the performance of implicit CF-NADE model with the popular implicit matrix factorization approach. Experimental results show that implicit CF-NADE significantly outperforms the baseline.", "title": "" }, { "docid": "fd03cf7e243571e9b3e81213fe91fd29", "text": "Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.", "title": "" } ]
[ { "docid": "9f322a4845dc167db1eb71d171b1d7bd", "text": "To act autonomously and robustly in complex environments, mobile robots require task planning to adapt to the given situation. With the growing capabilities of robot hardware and software this is becoming increasingly important. Yet, only few integrated robotic systems exist. This thesis attempts to make a next step towards robust plan-based robot control. For this, it first describes how the HTN-Planner SHOP2 [117] can be used in a robot control architecture that aims to improve its performance based on experiences. Based on that, means to increase the systems robustness by integrating the planner with other components are discussed. The environments and tasks of mobile robots contain various forms of knowledge like information about temporal requirements or resources that should be reasoned about in order to achieve a safe and robust behaviour. However, this cannot be fully used by planners like SHOP2. Therefore, this thesis presents the hierarchical hybrid planner CHIMP which combines the advantages of hierarchical planning and hybrid planning with different forms of knowledge as a MetaCSP [28, 104]. Furthermore, CHIMP’s plans can contain actions that can be executed in parallel, and additional goal tasks can be inserted into an existing plan during its execution.", "title": "" }, { "docid": "f327daaa356b7580c6da5ae43f125f87", "text": "A double-periodic array of pairs of parallel gold nanorods is shown to have a negative refractive index in the optical range. Such behavior results from the plasmon resonance in the pairs of nanorods for both the electric and the magnetic components of light. The refractive index is retrieved from direct phase and amplitude measurements for transmission and reflection, which are all in excellent agreement with simulations. Both experiments and simulations demonstrate that a negative refractive index n' approximately -0.3 is achieved at the optical communication wavelength of 1.5 microm using the array of nanorods. The retrieved refractive index critically depends on the phase of the transmitted wave, which emphasizes the importance of phase measurements in finding n'.", "title": "" }, { "docid": "e4e2b2e63fea47ff63348893d409f7c2", "text": "* This paper is the result of work being undertaken as part of a collaborative research program entitled ‘The Performance of Australian Enterprises: Innovation, Productivity and Profitability’. The project is generously supported by the Australian Research Council and the following collaborative partners: Australian Tax Office, Commonwealth Office of Small Business, IBIS Business Information Pty Ltd, Productivity Commission, and Victorian Department of State Development. The views expressed in this paper represent those of the authors and not necessarily the views of the collaborative partners. The author wishes to acknowledge helpful comments from Rob Phillips on an earlier draft of this paper.", "title": "" }, { "docid": "a58960142ea8d849a4a3c5150168c590", "text": "Deep networks are highly nonlinear and difficult to optimize. During training, the parameter iterate may move from one local basin to another, or the data distribution may even change. Inspired by the close connection between stochastic optimization and online learning, we propose a variant of the follow the regularized leader (FTRL) algorithm called follow the moving leader (FTML). 
Unlike the FTRL family of algorithms, the recent samples are weighted more heavily in each iteration and so FTML can adapt more quickly to changes. We show that FTML enjoys the nice properties of RMSprop and Adam, while avoiding their pitfalls. Experimental results on a number of deep learning models and tasks demonstrate that FTML converges quickly, and outperforms other state-ofthe-art optimizers.", "title": "" }, { "docid": "9ca90172c5beff5922b4f5274ef61480", "text": "In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep-learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this article, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete, and in-depth evaluation of CNN-to-FPGA toolflows.", "title": "" }, { "docid": "39c1f5ac009a1100df0979a544d98346", "text": "A phishing email is a legitimate-looking email which is designed to fool the recipient into believing that it is a genuine email, and either reveals sensitive information or downloads malicious software through clicking on malicious links contained in the body of the email. Given that phishing emails cost UK consumers £174m in 2015, this paper proposal is driven by a problem whose resolution will have a great impact on people's lives in the UK and in the world. In this paper, we proposed a Neural Network (NN)-based model for detections and classifications of phishing emails using publically available email datasets for both benign and phishing emails. The results of the experiments are presented in order to demonstrate the effectiveness of the model in terms of accuracy, true-positive rate, false-positive rate, network performance and error histogram.", "title": "" }, { "docid": "b2261ae40cb837da9aca69916590a7b2", "text": "Since many languages originated from a common ancestral language and influence each other, there would inevitably exist similarities between these languages such as lexical similarity and named entity similarity. In this paper, we leverage these similarities to improve the translation performance in neural machine translation. Specifically, we introduce an attention-via-attention mechanism that allows the information of source-side characters flowing to the target side directly. With this mechanism, the target-side characters will be generated based on the representation of source-side characters when the words are similar. For instance, our proposed neural machine translation system learns to transfer the characterlevel information of the English word ‘system’ through the attention-via-attention mechanism to generate the Czech word ‘systém’. 
Consequently, our approach is able to not only achieve a competitive translation performance, but also reduce the model size significantly.", "title": "" }, { "docid": "4f343f38b83dcbbab639997bb3db2df4", "text": "Memory bandwidth has become a major performance bottleneck as more and more cores are integrated onto a single die, demanding more and more data from the system memory. Several prior studies have demonstrated that this memory bandwidth problem can be addressed by employing a 3D-stacked memory architecture, which provides a wide, high frequency memory-bus interface. Although previous 3D proposals already provide as much bandwidth as a traditional L2 cache can consume, the dense through-silicon-vias (TSVs) of 3D chip stacks can provide still more bandwidth. In this paper, we contest that we need to re-architect our memory hierarchy, including the L2 cache and DRAM interface, so that it can take full advantage of this massive bandwidth. Our technique, SMART-3D, is a new 3D-stacked memory architecture with a vertical L2 fetch/write-back network using a large array of TSVs. Simply stated, we leverage the TSV bandwidth to hide latency behind very large data transfers. We analyze the design trade-offs for the DRAM arrays, careful enough to avoid compromising the DRAM density because of TSV placement. Moreover, we propose an efficient mechanism to manage the false sharing problem when implementing SMART-3D in a multi-socket system. For single-threaded memory-intensive applications, the SMART-3D architecture achieves speedups from 1.53 to 2.14 over planar designs and from 1.27 to 1.72 over prior 3D designs. We achieve similar speedups for multi-program and multi-threaded workloads on multi-core and multi-socket processors. Furthermore, SMART-3D can even lower the energy consumption in the L2 cache and 3D DRAM for it reduces the total number of row buffer misses.", "title": "" }, { "docid": "53e7c26ce6abc85d721b2f1661d1c3c0", "text": "For the detail mapping there are multiple methods that can be used. In Battlefield 2, a 256 m patch of the terrain could have up to six different tiling detail maps that were blended together using one or two three-component unique detail mask textures (Figure 4) that controlled the visibility of the individual detail maps. Artists would paint or generate the detail masks just as for the color map.", "title": "" }, { "docid": "9955e5a03700d432098be9118faebe61", "text": "Enterprise Resource Planning (ERP) systems are integrated, enterprise-wide systems that provide automated support for standard business processes within organisations. They have been adopted by organisations throughout the world with varying degrees of success. Implementing ERP systems is a complex, lengthy and expensive process. In this paper we synthesise an ERP systems implementation process model and a set of critical success factors for ERP systems implementation. Two case studies of ERP systems implementation, one in Australia and one in China are reported. The case studies identify which critical success factors are important in which process model phases. Case study analysis then explains the differences between the Australian and Chinese cases using national cultural characteristics. 
Outcomes of the research are important for multinational organisations implementing ERP systems and for consulting companies assisting with ERP systems implementation in different countries.", "title": "" }, { "docid": "9d9665a21e5126ba98add5a832521cd1", "text": "Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Few studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and to score, and less prone to overfitting.", "title": "" }, { "docid": "cbdbe103bcc85f76f9e6ac09eed8ea4c", "text": "Using the evidence collection and analysis methodology for Android devices proposed by Martini, Do and Choo (2015), we examined and analyzed seven popular Android cloud-based apps. Firstly, we analyzed each app in order to see what information could be obtained from their private app storage and SD card directories. We collated the information and used it to aid our investigation of each app’s database files and AccountManager data. To complete our understanding of the forensic artefacts stored by apps we analyzed, we performed further analysis on the apps to determine if the user’s authentication credentials could be collected for each app based on the information gained in the initial analysis stages. The contributions of this research include a detailed description of artefacts, which are of general forensic interest, for each app analyzed.", "title": "" }, { "docid": "6480f98a792ca9cdb961e85357a73461", "text": "Since its first use in the steroid field in the late 1950s, the use of fluorine in medicinal chemistry has become commonplace, with the small electronegative fluorine atom being a key part of the medicinal chemist's repertoire of substitutions used to modulate all aspects of molecular properties including potency, physical chemistry and pharmacokinetics. This review will highlight the special nature of fluorine, drawing from a survey of marketed fluorinated pharmaceuticals and the medicinal chemistry literature, to illustrate key concepts exploited by medicinal chemists in their attempts to optimize drug molecules. Some of the potential pitfalls in the use of fluorine will also be highlighted.", "title": "" }, { "docid": "e1b69d4f2342a90b52215927f727421b", "text": "We present an inertial sensor based monitoring system for measuring upper limb movements in real time. The purpose of this study is to develop a motion tracking device that can be integrated within a home-based rehabilitation system for stroke patients. Human upper limbs are represented by a kinematic chain in which there are four joint variables to be considered: three for the shoulder joint and one for the elbow joint. Kinematic models are built to estimate upper limb motion in 3-D, based on the inertial measurements of the wrist motion. 
An efficient simulated annealing optimisation method is proposed to reduce errors in estimates. Experimental results demonstrate the proposed system has less than 5% errors in most motion manners, compared to a standard motion tracker.", "title": "" }, { "docid": "5ebf60a0f113ec60c4f9f3c2089e86cb", "text": "A rapidly burgeoning literature documents copious sex influences on brain anatomy, chemistry and function. This article highlights some of the more intriguing recent discoveries and their implications. Consideration of the effects of sex can help to explain seemingly contradictory findings. Research into sex influences is mandatory to fully understand a host of brain disorders with sex differences in their incidence and/or nature. The striking quantity and diversity of sex-related influences on brain function indicate that the still widespread assumption that sex influences are negligible cannot be justified, and probably retards progress in our field.", "title": "" }, { "docid": "b4a2c3679fe2490a29617c6a158b9dbc", "text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.", "title": "" }, { "docid": "bfdc5925a540686d03b6314bf2009db3", "text": "This paper describes our programmable analog technology based around floating-gate transistors that allow for non-volatile storage as well as computation through the same device. We describe the basic concepts for floating-gate devices, capacitor-based circuits, and the basic charge modification mechanisms that make this analog technology programmable. We describe the techniques to extend these techniques to program a nonhomogeneous array of floating-gate devices.", "title": "" }, { "docid": "87ff81e6c98e4d9df1b7c6420b502dfd", "text": "The field of visualization is getting mature. Many problems have been solved, and new directions are sought for. In order to make good choices, an understanding of the purpose and meaning of visualization is needed. Especially, it would be nice if we could assess what a good visualization is. In this paper an attempt is made to determine the value of visualization. A technological viewpoint is adopted, where the value of visualization is measured based on effectiveness and efficiency. An economic model of visualization is presented, and benefits and costs are established. Next, consequences and limitations of visualization are discussed (including the use of alternative methods, high initial costs, subjectiveness, and the role of interaction), as well as examples of the use of the model for the judgement of existing classes of methods and understanding why they are or are not used in practice. Furthermore, two alternative views on visualization are presented and discussed: viewing visualization as an art or as a scientific discipline. 
Implications and future directions are identified.", "title": "" }, { "docid": "538ad3f32bbf333d73e619efc8ab4e9c", "text": "In order to learn effective control policies for dynamical systems, policy search methods must be able to discover successful executions of the desired task. While random exploration can work well in simple domains, complex and highdimensional tasks present a serious challenge, particularly when combined with high-dimensional policies that make parameter-space exploration infeasible. We present a method that uses trajectory optimization as a powerful exploration strategy that guides the policy search. A variational decomposition of a maximum likelihood policy objective allows us to use standard trajectory optimization algorithms such as differential dynamic programming, interleaved with standard supervised learning for the policy itself. We demonstrate that the resulting algorithm can outperform prior methods on two challenging locomotion tasks.", "title": "" } ]
scidocsrr
d86073c9ca03a44701e4d1f2d74d60f6
Content-based organization and visualization of music archives
[ { "docid": "90dececdeb4747ccfd87f75da6d53692", "text": "Much of the work on perception and understanding of music by computers has focused on low-level perceptual features such as pitch and tempo. Our work demonstrates that machine learning can be used to build e ective style classi ers for interactive performance systems. We also present an analysis explaining why these techniques work so well when hand-coded approaches have consistently failed. We also describe a reliable real-time performance style classi er.", "title": "" } ]
[ { "docid": "0a4a124589dffca733fa9fa87dc94b35", "text": "where ri is the reward in cycle i of a given history, and the expected value is taken over all possible interaction histories of π and μ. The choice of γi is a subtle issue that controls how greedy or far sighted the agent should be. Here we use the near-harmonic γi := 1/i2 as this produces an agent with increasing farsightedness of the order of its current age [Hutter2004]. As we desire an extremely general definition of intelligence for arbitrary systems, our space of environments should be as large as possible. An obvious choice is the space of all probability measures, however this causes serious problems as we cannot even describe some of these measures in a finite way.", "title": "" }, { "docid": "8272f6d511cc8aa104ba10c23deb17a5", "text": "The challenge of developing facial recognition systems has been the focus of many research efforts in recent years and has numerous applications in areas such as security, entertainment, and biometrics. Recently, most progress in this field has come from training very deep neural networks on massive datasets which is computationally intensive and time consuming. Here, we propose a deep transfer learning (DTL) approach that integrates transfer learning techniques and convolutional neural networks and apply it to the problem of facial recognition to fine-tune facial recognition models. Transfer learning can allow for the training of robust, high-performance machine learning models that require much less time and resources to produce than similarly performing models that have been trained from scratch. Using a pre-trained face recognition model, we were able to perform transfer learning to produce a network that is capable of making accurate predictions on much smaller datasets. We also compare our results with results produced by a selection of classical algorithms on the same datasets to demonstrate the effectiveness of the proposed DTL approach.", "title": "" }, { "docid": "6e675e8a57574daf83ab78cea25688f5", "text": "Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore “unsupervised” approaches to quality prediction that does not require labelled data. An alternate technique is to use “supervised” approaches that learn models from project data labelled with, say, “defective” or “not-defective”. Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. \nAt FSE’16, Yang et al. reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the software engineering literature. \nThis paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. predictors so even with their approach, some supervised data is required to prune weaker predictors away. (2) Their findings were grouped across N projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. \nEven though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our our experiments, supervised predictors did not perform outstandingly better than unsupervised ones for effort-aware just-in-time defect prediction. 
Hence, they may indeed be some combination of unsupervised learners to achieve comparable performance to supervised ones. We therefore encourage others to work in this promising area.", "title": "" }, { "docid": "57752057b1665cec9433aa3fe055be1e", "text": "BACKGROUND\nPlacebo treatment can significantly influence subjective symptoms. However, it is widely believed that response to placebo requires concealment or deception. We tested whether open-label placebo (non-deceptive and non-concealed administration) is superior to a no-treatment control with matched patient-provider interactions in the treatment of irritable bowel syndrome (IBS).\n\n\nMETHODS\nTwo-group, randomized, controlled three week trial (August 2009-April 2010) conducted at a single academic center, involving 80 primarily female (70%) patients, mean age 47 ± 18 with IBS diagnosed by Rome III criteria and with a score ≥ 150 on the IBS Symptom Severity Scale (IBS-SSS). Patients were randomized to either open-label placebo pills presented as \"placebo pills made of an inert substance, like sugar pills, that have been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes\" or no-treatment controls with the same quality of interaction with providers. The primary outcome was IBS Global Improvement Scale (IBS-GIS). Secondary measures were IBS Symptom Severity Scale (IBS-SSS), IBS Adequate Relief (IBS-AR) and IBS Quality of Life (IBS-QoL).\n\n\nFINDINGS\nOpen-label placebo produced significantly higher mean (±SD) global improvement scores (IBS-GIS) at both 11-day midpoint (5.2 ± 1.0 vs. 4.0 ± 1.1, p<.001) and at 21-day endpoint (5.0 ± 1.5 vs. 3.9 ± 1.3, p = .002). Significant results were also observed at both time points for reduced symptom severity (IBS-SSS, p = .008 and p = .03) and adequate relief (IBS-AR, p = .02 and p = .03); and a trend favoring open-label placebo was observed for quality of life (IBS-QoL) at the 21-day endpoint (p = .08).\n\n\nCONCLUSION\nPlacebos administered without deception may be an effective treatment for IBS. Further research is warranted in IBS, and perhaps other conditions, to elucidate whether physicians can benefit patients using placebos consistent with informed consent.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT01010191.", "title": "" }, { "docid": "116463e16452d6847c94f662a90ac2ef", "text": "The ubiquity of mobile devices with global positioning functionality (e.g., GPS and AGPS) and Internet connectivity (e.g., 3G andWi-Fi) has resulted in widespread development of location-based services (LBS). Typical examples of LBS include local business search, e-marketing, social networking, and automotive traffic monitoring. Although LBS provide valuable services for mobile users, revealing their private locations to potentially untrusted LBS service providers pose privacy concerns. In general, there are two types of LBS, namely, snapshot and continuous LBS. For snapshot LBS, a mobile user only needs to report its current location to a service provider once to get its desired information. On the other hand, a mobile user has to report its location to a service provider in a periodic or on-demand manner to obtain its desired continuous LBS. Protecting user location privacy for continuous LBS is more challenging than snapshot LBS because adversaries may use the spatial and temporal correlations in the user's location samples to infer the user's location information with higher certainty. 
Such user location trajectories are also very important for many applications, e.g., business analysis, city planning, and intelligent transportation. However, publishing such location trajectories to the public or a third party for data analysis could pose serious privacy concerns. Privacy protection in continuous LBS and trajectory data publication has increasingly drawn attention from the research community and industry. In this survey, we give an overview of the state-of-the-art privacy-preserving techniques in these two problems.", "title": "" }, { "docid": "bf5d53e5465dd5e64385bf9204324059", "text": "A model of core losses, in which the hysteresis coefficients are variable with the frequency and induction (flux density) and the eddy-current and excess loss coefficients are variable only with the induction, is proposed. A procedure for identifying the model coefficients from multifrequency Epstein tests is described, and examples are provided for three typical grades of non-grain-oriented laminated steel suitable for electric motor manufacturing. Over a wide range of frequencies between 20-400 Hz and inductions from 0.05 to 2 T, the new model yielded much lower errors for the specific core losses than conventional models. The applicability of the model for electric machine analysis is also discussed, and examples from an interior permanent-magnet and an induction motor are included.", "title": "" }, { "docid": "0506a7f5dddf874487c90025dff0bc7d", "text": "This paper presents a low-power decision-feedback equalizer (DFE) receiver front-end and a two-step minimum bit-error-rate (BER) adaptation algorithm. A high energy efficiency of 0.46 mW/Gbps is made possible by the combination of a direct-feedback finite-impulse-response (FIR) DFE, an infinite-impulse-response (IIR) DFE, and a clock-and-data recovery (CDR) circuit with adjustable timing offsets. Based on this architecture, the power-hungry stages used in prior DFE receivers such as the continuous-time linear equalizer (CTLE), the current-mode summing circuit for a multitap DFE, and the fast selection logic for a loop-unrolling DFE can all be removed. A two-step adaptation algorithm that finds the equalizer coefficients minimizing the BER is described. First, an extra data sampler with adjustable voltage and timing offsets measures the single-bit response (SBR) of the channel and coarsely tunes the initial coefficient values in the foreground. Next, the same circuit measures the eye-opening and bit-error rates and fine tunes the coefficients in background using a stochastic hill-climbing algorithm. A prototype DFE receiver fabricated in a 65-nm LP/RF CMOS dissipates 2.3 mW and demonstrates measured eye-opening values of 174 mV pp and 0.66 UIpp while operating at 5 Gb/s with a -15-dB loss channel.", "title": "" }, { "docid": "3dfdc8abe03dd77730fe485f07588f43", "text": "Background\nThe most common neurodegenerative disease is dementia. Family of dementia patients says that their lives have been changed extensively after happening of dementia to their patients. One of the problems of family and caregivers is depression of the caregiver. In this study, we aimed to find the prevalence of depression and factors can affect depression in the dementia caregivers.\n\n\nMaterials and Methods\nThis study was cross-sectional study with convenient sampling method. Our society was 96 main caregivers of dementia patients in the year 2015 in Iran. We had two questionnaires, a demographic and Beck Depression Inventory (BDI). 
BDI Cronbach's alpha is 0.86 for psychiatric patients and 0.81 for nonpsychiatric persons, and Beck's scores are between 0 and 64. We used SPSS version 22 for statistical analysis.\n\n\nResults\nAccording to Beck depression test, 69.8% (n = 67 out of 96) of all caregivers had scores in the range of depression. In bivariate analysis, we found higher dementia severity and lower support of other family members from the caregiver can predict higher depression in the caregiver. As well, in regression analysis using GLM model, we found higher age and lower educational level of the caregiver can predict higher depression in the caregiver. Moreover, regression analysis approved findings about severity and support of other family members in bivariate analysis.\n\n\nConclusion\nHigh-level depression is found in caregivers of dementia patients. It needs special attention from healthcare managers, clinicians and all of health-care personnel who deals with dementia patients and their caregivers.", "title": "" }, { "docid": "70c75c5456563a80276d883d8a5241b3", "text": "One of the main problems in underwater communications is the low data rate available due to the use of low frequencies. Moreover, there are many problems inherent to the medium such as reflections, refraction, energy dispersion, etc., that greatly degrade communication between devices. In some cases, wireless sensors must be placed quite close to each other in order to take more accurate measurements from the water while having high communication bandwidth. In these cases, while most researchers focus their efforts on increasing the data rate for low frequencies, we propose the use of the 2.4 GHz ISM frequency band in these special cases. In this paper, we show our wireless sensor node deployment and its performance obtained from a real scenario and measures taken for different frequencies, modulations and data transfer rates. The performed tests show the maximum distance between sensors, the number of lost packets and the average round trip time. Based on our measurements, we provide some experimental models of underwater communication in fresh water using EM waves in the 2.4 GHz ISM frequency band. Finally, we compare our communication system proposal with the existing systems. Although our proposal provides short communication distances, it provides high data transfer rates. It can be used for precision monitoring in applications such as contaminated ecosystems or for devices to communicate at high depth.", "title": "" }, { "docid": "b34beab849a50ff04a948f277643fb74", "text": "To cite: Hirai T, Koster M. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2013-009759 DESCRIPTION A 22-year-old man with a history of intravenous heroin misuse, presented with 1 week of fatigue and fever. Blood cultures were positive for methicillin-sensitive Staphylococcus aureus. Physical examination showed multiple painful 1–2 mm macular rashes on the palm and soles bilaterally (figures 1 and 2). Splinter haemorrhages (figure 3) and conjunctival petechiae (figure 4) were also noted. A transoesophageal echocardiogram demonstrated a 16-mm vegetation on the mitral valve (figure 5). Vegetations >10 mm in diameter and infection involving the mitral valve are independently associated with an increased risk of embolisation. However, he decided medical management after extensive discussion and was treated with intravenous nafcillin for 6 weeks. 
He returned 8 weeks later with acute shortness of breath and evidence of a perforated mitral valve for which he subsequently underwent a successful mitral valve repair with an uneventful recovery.", "title": "" }, { "docid": "ce096e9ee74932e0e0d04d2638f54d2a", "text": "The Internet of Things is one of the most promising technological developments in information technology. It promises huge financial and nonfinancial benefits across supply chains, in product life cycle and customer relationship applications as well as in smart environments. However, the adoption process of the Internet of Things has been slower than expected. One of the main reasons for this is the missing profitability for each individual stakeholder. Costs and benefits are not equally distributed. Cost benefit sharing models have been proposed to overcome this problem and to enable new areas of application. However, these cost benefit sharing approaches are complex, time consuming, and have failed to achieve broad usage. In this chapter, an alternative concept, suggesting flexible pricing and trading of information, is proposed. On the basis of a beverage supply chain scenario, a prototype installation, based on an open source billing solution and the Electronic Product Code Information Service (EPCIS), is shown as a proof of concept and an introduction to different pricing options. This approach allows a more flexible and scalable solution for cost benefit sharing and may enable new business models for the Internet of Things. University of Bremen, Planning and Control of Production Systems, Germany", "title": "" }, { "docid": "de298bb631dd0ca515c161b6e6426a85", "text": "We address the problem of sharpness enhancement of images. Existing hierarchical techniques that decompose an image into a smooth image and high frequency components based on Gaussian filter and bilateral filter suffer from halo effects, whereas techniques based on weighted least squares extract low contrast features as detail. Other techniques require multiple images and are not tolerant to noise.", "title": "" }, { "docid": "01a7f149ce8dbc50977010da1b181bac", "text": "Stahl's ear is a rare congenital anomaly difficult to correct surgically. This report presents the experience of the Division of Plastic Surgery, University of São Paulo Medical School for managing this anomaly. From January 1994 to September 1999, 15 patients underwent surgery (17 ears). Ages ranged from 7 to 22 (mean 15 years). Six patients were female. Four subjects were of Oriental descent, four were Negroes, and seven were Caucasians. Deformities included two bilateral, four on the left ear, and nine on the right one. Different methods were utilized for correction depending on the elasticity of the cartilage. In the presence of an elastic cartilage, sutures only are employed, otherwise the cartilage is repositioned, as described by Sugino et al. No standard characteristics were noted regarding gender or race, however, there were more unilateral cases and more on the right ear. The results were adequately satisfactory, with the two methods enabling us to recommend these surgical techniques for correction of Stahl's ear.", "title": "" }, { "docid": "47b18dfccf44bdc516af97c857ebe0f7", "text": "Therapeutic drug monitoring (TDM) is a procedure in which the levels of drugs are assayed in various body fluids with the aim of individualizing the dose of critical drugs, such as cyclosporine A. Cyclosporine A assays are performed in blood. 
We proposed the use of the Takagi and Sugeno-type “adaptive-network-based fuzzy inference system” (ANFIS) to predict the concentration of cyclosporine A in blood samples taken from renal transplantation patients. We implemented the ANFIS model using TDM data collected from 138 patients and 20 input parameters. Input parameters for the model consisted of concurrent use of drugs, blood levels, sampling time, age, gender, and dosing intervals. Fuzzy modeling produced eight rules. The developed ANFIS model exhibited a root mean square error (RMSE) of 0.045 with respect to the training data and an error of 0.057 with respect to the checking data in the MATLAB environment. ANFIS can effectively assist physicians in choosing best therapeutic drug dose in the clinical setting.", "title": "" }, { "docid": "6ca4d0021c11906bae4dbd5db9b47c80", "text": "Writing code to interact with external devices is inherently difficult, and the added demands of writing device drivers in C for kernel mode compounds the problem. This environment is complex and brittle, leading to increased development costs and, in many cases, unreliable code. Previous solutions to this problem ignore the cost of migrating drivers to a better programming environment and require writing new drivers from scratch or even adopting a new operating system. We present Decaf Drivers, a system for incrementally converting existing Linux kernel drivers to Java programs in user mode. With support from programanalysis tools, Decaf separates out performance-sensitive code and generates a customized kernel interface that allows the remaining code to be moved to Java. With this interface, a programmer can incrementally convert driver code in C to a Java decaf driver. The Decaf Drivers system achieves performance close to native kernel drivers and requires almost no changes to the Linux kernel. Thus, Decaf Drivers enables driver programming to advance into the era of modern programming languages without requiring a complete rewrite of operating systems or drivers. With five drivers converted to Java, we show that Decaf Drivers can (1) move the majority of a driver’s code out of the kernel, (2) reduce the amount of driver code, (3) detect broken error handling at compile time with exceptions, (4) gracefully evolve as driver and kernel code and data structures change, and (5) perform within one percent of native kernel-only drivers.", "title": "" }, { "docid": "4ce9574ef4d2ca03fa7cc1bc05f306d5", "text": "A social recommendation system has attracted a lot of attention recently in the research communities of information retrieval, machine learning, and data mining. Traditional social recommendation algorithms are often based on batch machine learning methods which suffer from several critical limitations, e.g., extremely expensive model retraining cost whenever new user ratings arrive, unable to capture the change of user preferences over time. Therefore, it is important to make social recommendation system suitable for real-world online applications where data often arrives sequentially and user preferences may change dynamically and rapidly. In this paper, we present a new framework of online social recommendation from the viewpoint of online graph regularized user preference learning (OGRPL), which incorporates both collaborative user-item relationship as well as item content features into an unified preference learning process. 
We further develop an efficient iterative procedure, OGRPL-FW which utilizes the Frank-Wolfe algorithm, to solve the proposed online optimization problem. We conduct extensive experiments on several large-scale datasets, in which the encouraging results demonstrate that the proposed algorithms obtain significantly lower errors (in terms of both RMSE and MAE) than the state-of-the-art online recommendation methods when receiving the same amount of training data in the online learning process.", "title": "" }, { "docid": "ec1f47a6ca0edd2334fc416d29ce02ea", "text": "We present Synereo, a next-gen decentralized and distributed social network designed for an attention economy. Our presentation is given in two chapters. Chapter 1 presents our design philosophy. Our goal is to make our users more effective agents by presenting social content that is relevant and actionable based on the user’s own estimation of value. We discuss the relationship between attention, value, and social agency in order to motivate the central mechanisms for content flow on the network. Chapter 2 defines a network model showing the mechanics of the network interactions, as well as the compensation model enabling users to promote content on the network and receive compensation for attention given to the network. We discuss the high-level technical implementation of these concepts based on the π-calculus the most well known of a family of computational formalisms known as the mobile process calculi. 0.1 Prologue: This is not a manifesto The Internet is overflowing with social network manifestos. Ello has a manifesto. Tsu has a manifesto. SocialSwarm has a manifesto. Even Disaspora had a manifesto. Each one of them is written in earnest with clear intent (see figure 1). Figure 1: Ello manifesto The proliferation of these manifestos and the social networks they advertise represents an important market shift, one that needs to be understood in context. The shift from mainstream media to social media was all about “user generated content”. In other words, people took control of the content by making it for and distributing it to each other. In some real sense it was a remarkable expansion of the shift from glamrock to punk and DIY; and like that movement, it was the sense of people having a say in what impressions they received that has been the underpinning of the success of Facebook and Twitter and YouTube and the other social media giants. In the wake of that shift, though, we’ve seen that even when the people are producing the content, if the service is in somebody else’s hands then things still go wonky: the service providers run psychology experiments via the social feeds [1]; they sell people’s personally identifiable and other critical info [2]; and they give data to spooks [3]. Most importantly, they do this without any real consent of their users. With this new wave of services people are expressing a desire to take more control of the service, itself. When the service is distributed, as is the case with Splicious and Diaspora, it is truly cooperative. And, just as with the music industry, where the technology has reached the point that just about anybody can have a professional studio in their home, the same is true with media services. People are recognizing that we don’t need big data centers with massive environmental impact, we need engagement at the level of the service, itself. 
If this really is the underlying requirement the market is articulating, then there is something missing from a social network that primarily serves up a manifesto with their service. While each of the networks mentioned above constitutes an important step in the right direction, they lack any clear indication", "title": "" }, { "docid": "7b4e9043e11d93d8152294f410390f6d", "text": "In this paper, we present a series of methods to authenticate a user with a graphical password. To that end, we employ the user¿s personal handheld device as the password decoder and the second factor of authentication. In our methods, a service provider challenges the user with an image password. To determine the appropriate click points and their order, the user needs some hint information transmitted only to her handheld device. We show that our method can overcome threats such as key-loggers, weak password, and shoulder surfing. With the increasing popularity of handheld devices such as cell phones, our approach can be leveraged by many organizations without forcing the user to memorize different passwords or carrying around different tokens.", "title": "" }, { "docid": "36380f539bb75e564a7a2377b9fab789", "text": "It is important to be able to program GUI applications in a fast and easy manner. Current GUI tools for creating visually attractive applications offer limited functionality. In this paper we introduce a new, easy to use method to program GUI applications in a pure functional language such as Clean or Generic Haskell. The method we use is a refined version of the model-view paradigm. The basic component in our approach is the Graphical Editor Component (GECτ ) that can contain any value of any flat data type τ and that can be freely used to display and edit its value. GECτ s can depend on others, but also on themselves. They can even be mutually dependent. With these components we can construct a flexible, reusable and customizable editor. For the realization of the components we had to invent a new generic implementation technique for interactive applications.", "title": "" }, { "docid": "146ebf4801c4ca47e8a2c36c79962b99", "text": "The thymus in teleost fishes plays an important role in producing functionally competent T-lymphocytes. However, the thymus in tilapia is not well known, which greatly hampers investigations into the immune responses of tilapia infected by aquatic pathogens. The histological structure and ultrastructure of the thymus in Oreochromis niloticus, including embryos and larvae at different developmental stages, juveniles, and adult fish, were systematically investigated using whole mount in situ hybridization (WISH), and light and transmission electron microscopy (TEM). The position of the thymus primordium was first labeled in the embryo at 2 days post-fertilization (dpf) with the thymus marker gene recombination activating gene 1 (Rag1), when the water temperature was 27 °C. Obvious structures of the thymus were easily observed in 4-dpf embryos. At this stage, the thymus was filled with stem cells. At 6 dpf, the thymus differentiated into the cortex and medulla. The shape of the thymus was 'broad bean'-like during the early stages from 4 to 10 dpf, and became wedge-shaped in fish larvae at 20 dpf. At 6 months post-fertilization (mpf), the thymus differentiated into the peripheral zone, central zone, and inner zone. During this stage, myoid cells and adipocytes appeared in the inner zone following thymus degeneration. 
Then, the thymus displayed more advanced degeneration by 1 year post-fertilization (ypf), and the separation of cortex and medulla was not observed at this stage. The thymic trabecula and lobule were absent during the entire course of development. However, the typical Hassall's corpuscle was present and underwent degeneration. Additionally, TEM showed that the thymic tissues contained a wide variety of cell types, namely lymphocytes, macrophages, epithelial cells, fibroblasts, and mastocytes.", "title": "" } ]
scidocsrr
0fdb6cc84617cf1b5ac8bb8c405dbb51
Natural Language Arguments: A Combined Approach
[ { "docid": "2a43e164e536600ee6ceaf6a9c1af1be", "text": "Unsupervised paraphrase acquisition has been an active research field in recent years, but its effective coverage and performance have rarely been evaluated. We propose a generic paraphrase-based approach for Relation Extraction (RE), aiming at a dual goal: obtaining an applicative evaluation scheme for paraphrase acquisition and obtaining a generic and largely unsupervised configuration for RE. We analyze the potential of our approach and evaluate an implemented prototype of it using an RE dataset. Our findings reveal a high potential for unsupervised paraphrase acquisition. We also identify the need for novel robust models for matching paraphrases in texts, which should address syntactic complexity and variability.", "title": "" }, { "docid": "74206eb5f85fd6ab0891c2a7fe9ffef8", "text": "This paper introduces the ArguMed-system. It is an example of a system for computer-mediated defeasible argumentation, a new trend in the field of defeasible argumentation. In this research, computer systems are developed that mediate the process of argumentation by one or more users. Argument-mediation systems should be contrasted with systems for automated reasoning: the latter perform reasoning tasks for users, while the former play the more passive role of a mediator. E.g., mediation systems keep track of the arguments raised and of the justification status of statements. The argumentation theory of the ArguMed-system is an adaptation of Verheij's CumulA-model, a procedural model of argumentation with arguments and counterarguments. In the CumulA-model, the defeat of arguments is determined by the structure of arguments and the attack relation between arguments. It is completely independent of the underlying language. The process-model is free, in the sense that it allows not only inference (i.e., 'forward' argumentation, drawing conclusions form premises), but also justification (i.e., 'backward' argumentation, adducing reasons for issues). The ArguMed-system has been designed in an attempt to enhance the familiarity of the interface and the transparency of the underlying argumentation theory of its precursor, the Argue!-system. The ArguMed-system's user interface is template-based, as is currently common in window-style user interfaces. The user gradually constructs arguments, by fill ing in templates that correspond to common argument patterns. An innovation of the ArguMed-system is that it uses dedicated templates for different types of argument moves. Whereas existing mediation systems are issue-based (in the style of Rittel's well-known Issue-Based Information System), the ArguMed-system allows free argumentation, as in the CumulA-model. In contrast with the CumulA-model, which has a very general notion of defeat, defeat in the ArguMed-system is only of Pollock's undercutter-type. The system allows three types of argument moves, viz. making a statement, adding a reason and its conclusion, and providing an (undercutter-type) exception blocking the connection between a reason and a conclusion. To put the ArguMed-system in context, it is compared with selected existing systems for argument mediation. The differences between the underlying argumentation theories and user interfaces are striking, which is suggested to be a symptom of the early stages of development of argument mediation systems. 
Given the lack of system evaluation by users in the field, the paper concludes with a discussion of the relevance of current research on computer-mediated defeasible argumentation. It is claimed that the shift of argument mediation systems from theoretical to practical tools is feasible, but can as yet not be made by system developers alone: a strong input from the research community is required.", "title": "" } ]
[ { "docid": "4716f812737e5ae082e30bab3fde16f9", "text": "Recently, electronic books (e-books) have become prevalent amongst the general population, as well as students, owing to their advantages over traditional books. In South Africa, a number of schools have integrated tablets into the classroom with the promise of replacing traditional books. In order to realise the potential of e-books and their associated devices within an academic context, where reading speed and comprehension are critical for academic performance and personal growth, the effectiveness of reading from a tablet screen should be evaluated. To achieve this objective, a quasi-experimental withinsubjects design was employed in order to compare the reading speed and comprehension performance of 68 students. The results of this study indicate the majority of participants read faster on an iPad, which is in contrast to previous studies that have found reading from tablets to be slower. It was also found that comprehension scores did not differ significantly between the two media. For students, these results provide evidence that tablets and e-books are suitable tools for reading and learning, and therefore, can be used for academic work. For educators, e-books can be introduced without concern that reading performance and comprehension will be hindered.", "title": "" }, { "docid": "5db1e7db73ae18802d04ed122ace42b0", "text": "Phishing is an online identity theft that aims to steal sensitive information such as username, password and online banking details from its victims. Phishing education needs to be considered as a means to combat this threat. This paper reports on a design and development of a mobile game prototype as an educational tool helping computer users to protect themselves against phishing attacks. The elements of a game design framework for avoiding phishing attacks were used to address the game design issues. Our mobile game design aimed to enhance the users' avoidance behaviour through motivation to protect themselves against phishing threats. A think-aloud study was conducted, along with a preand post-test, to assess the game design framework though the developed mobile game prototype. The study results showed a significant improvement of participants' phishing avoidance behaviour in their post-test assessment. Furthermore, the study findings suggest that participants' threat perception, safeguard effectiveness, self-efficacy, perceived severity and perceived susceptibility elements positively impact threat avoidance behaviour, whereas safeguard cost had a negative impact on it. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "912ea6c5267105610e8895e383a71e70", "text": "This paper provides a survey of work on the link between education and economic growth. It shows that data from the early 20th century are coherent with conclusions about education and economic growth derived from the much more recent past. It also presents an analysis of the role of education in facilitating the use of best-practice technology. It is to be published in the International Handbook on the Economics of Education edited by G and J. Johnes and published by Edward Elgar.", "title": "" }, { "docid": "1593fd6f9492adc851c709e3dd9b3c5f", "text": "This paper addresses the problem of extracting keyphrases from scientific articles and categorizing them as corresponding to a task, process, or material. 
We cast the problem as sequence tagging and introduce semi-supervised methods to a neural tagging model, which builds on recent advances in named entity recognition. Since annotated training data is scarce in this domain, we introduce a graph-based semi-supervised algorithm together with a data selection scheme to leverage unannotated articles. Both inductive and transductive semi-supervised learning strategies outperform state-of-the-art information extraction performance on the 2017 SemEval Task 10 ScienceIE task.", "title": "" }, { "docid": "ad076495666725ed3fd871c04d6b6794", "text": "Elite endurance athletes possess a high capacity for whole-body maximal fat oxidation (MFO). The aim was to investigate the determinants of a high MFO in endurance athletes. The hypotheses were that augmented MFO in endurance athletes is related to concomitantly increments of skeletal muscle mitochondrial volume density (MitoVD ) and mitochondrial fatty acid oxidation (FAOp ), that is, quantitative mitochondrial adaptations as well as intrinsic FAOp per mitochondria, that is, qualitative adaptations. Eight competitive male cross-country skiers and eight untrained controls were compared in the study. A graded exercise test was performed to determine MFO, the intensity where MFO occurs (FatMax ), and V ˙ O 2 Max . Skeletal muscle biopsies were obtained to determine MitoVD (electron microscopy), FAOp , and OXPHOSp (high-resolution respirometry). The following were higher (P < 0.05) in endurance athletes compared to controls: MFO (mean [95% confidence intervals]) (0.60 g/min [0.50-0.70] vs 0.32 [0.24-0.39]), FatMax (46% V ˙ O 2 Max [44-47] vs 35 [34-37]), V ˙ O 2 Max (71 mL/min/kg [69-72] vs 48 [47-49]), MitoVD (7.8% [7.2-8.5] vs 6.0 [5.3-6.8]), FAOp (34 pmol/s/mg muscle ww [27-40] vs 21 [17-25]), and OXPHOSp (108 pmol/s/mg muscle ww [104-112] vs 69 [68-71]). Intrinsic FAOp (4.0 pmol/s/mg muscle w.w/MitoVD [2.7-5.3] vs 3.3 [2.7-3.9]) and OXPHOSp (14 pmol/s/mg muscle ww/MitoVD [13-15] vs 11 [10-13]) were, however, similar in the endurance athletes and untrained controls. MFO and MitoVD correlated (r2  = 0.504, P < 0.05) in the endurance athletes. A strong correlation between MitoVD and MFO suggests that expansion of MitoVD might be rate-limiting for MFO in the endurance athletes. In contrast, intrinsic mitochondrial changes were not associated with augmented MFO.", "title": "" }, { "docid": "5536cc03e26fc3911f1019d2369c1cec", "text": "Monaural source separation is important for many real world applications. It is challenging because, with only a single channel of information available, without any constraints, an infinite number of solutions are possible. In this paper, we explore joint optimization of masking functions and deep recurrent neural networks for monaural source separation tasks, including speech separation, singing voice separation, and speech denoising. The joint optimization of the deep recurrent neural networks with an extra masking layer enforces a reconstruction constraint. Moreover, we explore a discriminative criterion for training neural networks to further enhance the separation performance. We evaluate the proposed system on the TSP, MIR-1K, and TIMIT datasets for speech separation, singing voice separation, and speech denoising tasks, respectively. 
Our approaches achieve 2.30-4.98 dB SDR gain compared to NMF models in the speech separation task, 2.30-2.48 dB GNSDR gain and 4.32-5.42 dB GSIR gain compared to existing models in the singing voice separation task, and outperform NMF and DNN baselines in the speech denoising task.", "title": "" }, { "docid": "5aab6cd36899f3d5e3c93cf166563a3e", "text": "Vein images generally appear darker with low contrast, which requires contrast enhancement during preprocessing to design a satisfactory hand vein recognition system. However, the modification introduced by contrast enhancement (CE) is reported to bring side effects through pixel intensity distribution adjustments. Furthermore, the inevitable results of fake vein generation or information loss occur and make nearly all vein recognition systems unconvincing. In this paper, a “CE-free” quality-specific vein recognition system is proposed, and three improvements are involved. First, a high-quality lab-vein capturing device is designed to solve the problem of low contrast from the view of hardware improvement. Then, a high-quality lab-made database is established. Second, CFISH score, a fast and effective measurement for vein image quality evaluation, is proposed to obtain the quality index of lab-made vein images. Then, unsupervised $K$-means with optimized initialization and convergence condition is designed with the quality index to obtain the grouping results of the database, namely, low quality (LQ) and high quality (HQ). Finally, discriminative local binary pattern (DLBP) is adopted as the basis for feature extraction. For the HQ images, DLBP is adopted directly for feature extraction, and for the LQ ones, CE_DLBP could be utilized for discriminative feature extraction. Based on the lab-made database, rigorous experiments are conducted to demonstrate the effectiveness and feasibility of the proposed system. What is more, an additional experiment with the PolyU database illustrates its generalization ability and robustness.", "title": "" }, { "docid": "224defa4906e121e42218f17c6efa4f2", "text": "This paper presents a particular model of heuristic search as a path-finding problem in a directed graph. A class of graph-searching procedures is described which uses a heuristic function to guide search. Heuristic functions are estimates of the number of edges that remain to be traversed in reaching a goal node. A number of theoretical results for this model, and the intuition for these results, are presented. They relate the efficiency of search to the accuracy of the heuristic function. The results also explore efficiency as a consequence of the reliance or weight placed on the heuristics used.", "title": "" }, { "docid": "cfaeeb000232ade838ad751b7b404a66", "text": "Meyer has recently introduced an image decomposition model to split an image into two components: a geometrical component and a texture (oscillatory) component. Inspired by his work, numerical models have been developed to carry out the decomposition of gray scale images. In this paper, we propose a decomposition algorithm for color images. We introduce a generalization of Meyer's G norm to RGB vectorial color images, and use the Chromaticity and Brightness color model with total variation minimization. We illustrate our approach with numerical examples. 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "69b631f179ea3c521f1dde75be537279", "text": "A conceptually simple but effective noise smoothing algorithm is described. 
This filter is motivated by the sigma probability of the Gaussian distribution, and it smooths the image noise by averaging only those neighborhood pixels which have the intensities within a fixed sigma range of the center pixel. Consequently, image edges are preserved, and subtle details and thin tines such as roads are retained. The characteristics of this smoothing algorithm are analyzed and compared with several other known filtering algorithms by their ability to retain subtle details, preserving edge shapes, sharpening ramp edges, etc. The comparison also indicates that the sigma filter is the most computationally efficient filter among those evaluated. The filter can be easily extended into several forms which can be used in contrast enhancement, image segmentation, and smoothing signal-dependent noisy images. Several test images 128 X 128 and 256 X 256 pixels in size are used to substantiate its characteristics. The algorithm can be easily extended to 3-D image smoothing.", "title": "" }, { "docid": "b863ab617b5c800fe570f579b2b12b11", "text": "Bourdieu and Education: How Useful is Bourdieu's Theory for Researchers?", "title": "" }, { "docid": "330de15c472bd403f2572f3bdcce2d52", "text": "Programmers repeatedly reuse code snippets. Retyping boilerplate code, and rediscovering how to correctly sequence API calls, programmers waste time. In this paper, we develop techniques that automatically synthesize code snippets upon a programmer’s request. Our approach is based on discovering snippets located in repositories; we mine repositories offline and suggest discovered snippets to programmers. Upon request, our synthesis procedure uses programmer’s current code to find the best fitting snippets, which are then presented to the programmer. The programmer can then either learn the proper API usage or integrate the synthesized snippets directly into her code. We call this approach interactive code snippet synthesis through repository mining. We show that this approach reduces the time spent developing code for 32% in our experiments.", "title": "" }, { "docid": "b6249dbd61928a0722e0bcbf18cd9f79", "text": "For many applications such as tele-operational robots and interactions with virtual environments, it is better to have performance with force feedback than without. Haptic devices are force reflecting interfaces. They can also track human hand positions simultaneously. A new 6 DOF (degree-of-freedom) haptic device was designed and calibrated in this study. It mainly contains a double parallel linkage, a rhombus linkage, a rotating mechanical structure and a grasping interface. Benefited from the unique design, it is a hybrid structure device with a large workspace and high output capability. Therefore, it is capable of multi-finger interactions. Moreover, with an adjustable base, operators can change different postures without interrupting haptic tasks. To investigate the performance regarding position tracking accuracy and static output forces, we conducted experiments on a three-dimensional electric sliding platform and a digital force gauge, respectively. Displacement errors and force errors are calculated and analyzed. 
To identify the capability and potential of the device, four application examples were programmed.", "title": "" }, { "docid": "ba2769abc859882f600e64cb14af2ac6", "text": "OBJECTIVE\nThis study measures and compares the outcome of conservative physical therapy with traction, by using magnetic resonance imaging and clinical parameters in patients presenting with low back pain caused by lumbar disc herniation.\n\n\nMETHODS\nA total of 26 patients with LDH (14F, 12M with mean aged 37 +/- 11) were enrolled in this study and 15 sessions (per day on 3 weeks) of physical therapy were applied. That included hot pack, ultrasound, electrotherapy and lumbar traction. Physical examination of the lumbar spine, severity of pain, sleeping order, patient and physician global assessment with visual analogue scale, functional disability by HAQ, Roland Disability Questionnaire, and Modified Oswestry Disability Questionnaire were assessed at baseline and at 4-6 weeks after treatment. Magnetic resonance imaging examinations were carried out before and 4-6 weeks after the treatment\n\n\nRESULTS\nAll patients completed the therapy session. There were significant reductions in pain, sleeping disturbances, patient and physician global assessment and disability scores, and significant increases in lumbar movements between baseline and follow-up periods. There were significant reductions of size of the herniated mass in five patients, and significant increase in 3 patients on magnetic resonance imaging after treatment, but no differences in other patients.\n\n\nCONCLUSIONS\nThis study showed that conventional physical therapies with lumbar traction were effective in the treatment of patient with subacute LDH. These results suggest that clinical improvement is not correlated with the finding of MRI. Patients with LDH should be monitored clinically (Fig. 3, Ref. 18).", "title": "" }, { "docid": "7a8babef15e8ed12d44dab68b5e17f6d", "text": "Hands-free terminals for speech communication employ adaptive filters to reduce echoes resulting from the acoustic coupling between loudspeaker and microphone. When using a personal computer with commercial audio hardware for teleconferencing, a sampling frequency offset between the loudspeaker output D/A converter and the microphone input A/D converter often occurs. In this case, state-of-the-art echo cancellation algorithms fail to track the correct room impulse response. In this paper, we present a novel least mean square (LMS-type) adaptive algorithm to estimate the frequency offset and resynchronize the signals using arbitrary sampling rate conversion. In conjunction with a normalized LMS-type adaptive filter for room impulse response tracking, the proposed system widely removes the deteriorating effects of a frequency offset up to several Hz and restores the functionality of echo cancellation.", "title": "" }, { "docid": "0c3fa5b92d95abb755f12dda030474c2", "text": "This paper examines the hypothesis that the persistence of low spatial and marital mobility in rural India, despite increased growth rates and rising inequality in recent years, is due to the existence of sub-caste networks that provide mutual insurance to their members. Unique panel data providing information on income, assets, gifts, loans, consumption, marriage, and migration are used to link caste networks to household and aggregate mobility. 
Our key finding, consistent with the hypothesis that local risk-sharing networks restrict mobility, is that among households with the same (permanent) income, those in higher-income caste networks are more likely to participate in caste-based insurance arrangements and are less likely to both out-marry and out-migrate. At the aggregate level, the networks appear to have coped successfully with the rising inequality within sub-castes that accompanied the Green Revolution. The results suggest that caste networks will continue to smooth consumption in rural India for the foreseeable future, as they have for centuries, unless alternative consumption-smoothing mechanisms of comparable quality become available. ∗We are very grateful to Andrew Foster for many useful discussions that substantially improved the paper. We received helpful comments from Jan Eeckhout, Rachel Kranton, Ethan Ligon and seminar participants at Arizona, Chicago, Essex, Georgetown, Harvard, IDEI, ITAM, LEA-INRA, LSE, Ohio State, UCLA, and NBER. Alaka Holla provided excellent research assistance. Research support from NICHD grant R01-HD046940 and NSF grant SES-0431827 is gratefully acknowledged. †Brown University and NBER ‡Yale University", "title": "" }, { "docid": "cdf141e5dfc1ce5573af8ac71ddca063", "text": "A vision-based row guidance method is presented to guide a robot platform which is designed independently to drive through the row crops in a field according to the design concept of open architecture. Then, the offset and heading angle of the robot platform are detected in real time to guide the platform on the basis of recognition of a crop row using machine vision. And the control scheme of the platform is proposed to carry out row guidance. Finally, the preliminary experiments of row guidance were implemented in a vegetable field. Experimental results show that algorithms of row identification and row guidance are effective according to the parameters measured and analyzed such as the heading angle and the offset for row guidance and the difference between the motion trajectory of the robot and the expected trajectory. And the accuracy of row guidance is up to ± 35mm, which means that the robot can move with a sufficiently high accuracy.", "title": "" }, { "docid": "0afb30eef5141d1d03996485088094a1", "text": "Microservices need to communicate via inter-service communication mechanism. This leads to service integration that requires the aspect of messaging and event handling to be considered during the specification of microservices. Therefore, the application of Enterprise Integration Patterns (EIP) in the context of the design of microservice architectures seems to be a promising approach for microservice architecture engineering. This paper presents a model-based approach for microservice architectural design and service integration through formal models using the UML and UML profiles. A domain model (bounded contexts) is used as a starting point. UML component diagrams are extended so that microservices can be modeled. Enterprise integration patterns are applied to the microservice diagrams providing a precise microservice specification that can be used for further simulations, transformations, or code generation tasks.", "title": "" }, { "docid": "7a0b0d314042bb753c8aa9da22e25a62", "text": "We present a new morphological analysis model that considers semantic plausibility of word sequences by using a recurrent neural network language model (RNNLM). 
In unsegmented languages, since language models are learned from automatically segmented texts and inevitably contain errors, it is not apparent that conventional language models contribute to morphological analysis. To solve this problem, we do not use language models based on raw word sequences but use a semantically generalized language model, RNNLM, in morphological analysis. In our experiments on two Japanese corpora, our proposed model significantly outperformed baseline models. This result indicates the effectiveness of RNNLM in morphological analysis.", "title": "" }, { "docid": "2dd16c00cccd76b2c4265131151c8cb5", "text": "This paper introduces the freely available WikEd Error Corpus. We describe the data mining process from Wikipedia revision histories, corpus content and format. The corpus consists of more than 12 million sentences with a total of 14 million edits of various types. As one possible application, we show that WikEd can be successfully adapted to improve a strong baseline in a task of grammatical error correction for English-as-a-Second-Language (ESL) learners’ writings by 2.63%. Used together with an ESL error corpus, a composed system gains 1.64% when compared to the ESL-trained system.", "title": "" } ]
scidocsrr
589e57538d3d59050745e15ea95a4ff1
A Systematic Approach for Attack Analysis and Mitigation in V2V Networks
[ { "docid": "a84143b7aa2d42f3297d81a036dc0f5e", "text": "Vehicular Ad hoc Networks (VANETs) have emerged recently as one of the most attractive topics for researchers and automotive industries due to their tremendous potential to improve traffic safety, efficiency and other added services. However, VANETs are themselves vulnerable against attacks that can directly lead to the corruption of networks and then possibly provoke big losses of time, money, and even lives. This paper presents a survey of VANETs attacks and solutions in carefully considering other similar works as well as updating new attacks and categorizing them into different classes.", "title": "" } ]
[ { "docid": "69a3676fad6416927cb59d818b999002", "text": "Good medical leadership is vital in delivering high-quality healthcare, and yet medical career progression has traditionally seen leadership lack credence in comparison with technical and academic ability. Individual standards have varied, leading to variations in the quality of medical leadership between different organisations and, on occasions, catastrophic lapses in the standard of care provided to patients. These high-profile events, plus increasing evidence linking clinical leadership to performance of units, has led recently to more focus on leadership development for all doctors, starting earlier and continuing throughout their careers. There is also an increased drive to see doctors take on more significant leadership roles throughout the healthcare system. The achievement of these aims will require doctors to develop strong personal and professional values, a range of non-technical skills that allow them to lead across professional boundaries, and an understanding of the increasingly complex environment in which 21st century healthcare is delivered. Developing these attributes will require dedicated resources and the sophisticated application of a variety of different learning methodologies such as mentoring, coaching, action learning and networking.", "title": "" }, { "docid": "1efe9405027ad67ccba8b18c3a28c6f0", "text": "To encourage strong passwords, system administrators employ password-composition policies, such as a traditional policy requiring that passwords have at least 8 characters from 4 character classes and pass a dictionary check. Recent research has suggested, however, that policies requiring longer passwords with fewer additional requirements can be more usable and in some cases more secure than this traditional policy. To explore long passwords in more detail, we conducted an online experiment with 8,143 participants. Using a cracking algorithm modified for longer passwords, we evaluate eight policies across a variety of metrics for strength and usability. Among the longer policies, we discover new evidence for a security/usability tradeoff, with none being strictly better than another on both dimensions. However, several policies are both more usable and more secure that the traditional policy we tested. Our analyses additionally reveal common patterns and strings found in cracked passwords. We discuss how system administrators can use these results to improve password-composition policies.", "title": "" }, { "docid": "33db7ac45c020d2a9e56227721b0be70", "text": "This thesis proposes an extended version of the Combinatory Categorial Grammar (CCG) formalism, with the following features: 1. grammars incorporate inheritance hierarchies of lexical types, defined over a simple, feature-based constraint language 2. CCG lexicons are, or at least can be, functions from forms to these lexical types This formalism, which I refer to as ‘inheritance-driven’ CCG (I-CCG), is conceptualised as a partially model-theoretic system, involving a distinction between category descriptions and their underlying category models, with these two notions being related by logical satisfaction. I argue that the I-CCG formalism retains all the advantages of both the core CCG framework and proposed generalisations involving such things as multiset categories, unary modalities or typed feature structures. In addition, I-CCG: 1. provides non-redundant lexicons for human languages 2. 
captures a range of well-known implicational word order universals in terms of an acquisition-based preference for shorter grammars This thesis proceeds as follows: Chapter 2 introduces the ‘baseline’ CCG formalism, which incorporates just the essential elements of category notation, without any of the proposed extensions. Chapter 3 reviews parts of the CCG literature dealing with linguistic competence in its most general sense, showing how the formalism predicts a number of language universals in terms of either its restricted generative capacity or the prioritisation of simpler lexicons. Chapter 4 analyses the first motivation for generalising the baseline category notation, demonstrating how certain fairly simple implicational word order universals are not formally predicted by baseline CCG, although they intuitively do involve considerations of grammatical economy. Chapter 5 examines the second motivation underlying many of the customised CCG category notations — to reduce lexical redundancy, thus allowing for the construction of lexicons which assign (each sense of) open class words and morphemes to no more than one lexical category, itself denoted by a non-composite lexical type.", "title": "" }, { "docid": "d652a2ffb4708b76d8fa70d7a452ae9f", "text": "If we are to achieve natural human–robot interaction, we may need to complement current vision and speech interfaces. Touch may provide us with an extra tool in this quest. In this paper we demonstrate the role of touch in interaction between a robot and a human. We show how infrared sensors located on robots can be easily used to detect and distinguish human interaction, in this case interaction with individual children. This application of infrared sensors potentially has many uses; for example, in entertainment or service robotics. This system could also benefit therapy or rehabilitation, where the observation and recording of movement and interaction is important. In the long term, this technique might enable robots to adapt to individuals or individual types of user. c © 2006 Published by Elsevier B.V.", "title": "" }, { "docid": "5e333f4620908dc643ceac8a07ff2a2d", "text": "Convolutional Neural Networks (CNNs) have reached outstanding results in several complex visual recognition tasks, such as classification and scene parsing. CNNs are composed of multiple filtering layers that perform 2D convolutions over input images. The intrinsic parallelism in such a computation kernel makes it suitable to be effectively accelerated on parallel hardware. In this paper we propose a highly flexible and scalable architectural template for acceleration of CNNs on FPGA devices, based on the cooperation between a set of software cores and a parallel convolution engine that communicate via a tightly coupled L1 shared scratchpad. Our accelerator structure, tested on a Xilinx Zynq XC-Z7045 device, delivers peak performance up to 80 GMAC/s, corresponding to 100 MMAC/s for each DSP slice in the programmable fabric. Thanks to the flexible architecture, convolution operations can be scheduled in order to reduce input/output bandwidth down to 8 bytes per cycle without degrading the performance of the accelerator in most of the meaningful use-cases.", "title": "" }, { "docid": "98c4f94eb35489d452cbd16c817e2bec", "text": "Many defect prediction techniques are proposed to improve software reliability. Change classification predicts defects at the change level, where a change is the modifications to one file in a commit. 
In this paper, we conduct the first study of applying change classification in practice.\n We identify two issues in the prediction process, both of which contribute to the low prediction performance. First, the data are imbalanced---there are much fewer buggy changes than clean changes. Second, the commonly used cross-validation approach is inappropriate for evaluating the performance of change classification. To address these challenges, we apply and adapt online change classification, resampling, and updatable classification techniques to improve the classification performance.\n We perform the improved change classification techniques on one proprietary and six open source projects. Our results show that these techniques improve the precision of change classification by 12.2-89.5% or 6.4--34.8 percentage points (pp.) on the seven projects. In addition, we integrate change classification in the development process of the proprietary project. We have learned the following lessons: 1) new solutions are needed to convince developers to use and believe prediction results, and prediction results need to be actionable, 2) new and improved classification algorithms are needed to explain the prediction results, and insensible and unactionable explanations need to be filtered or refined, and 3) new techniques are needed to improve the relatively low precision.", "title": "" }, { "docid": "a78913db9636369b2d7d8cb5e5a6a351", "text": "We propose a simple but strong baseline for time series classification from scratch with deep neural networks. Our proposed baseline models are pure end-to-end without any heavy preprocessing on the raw data or feature crafting. The proposed Fully Convolutional Network (FCN) achieves premium performance to other state-of-the-art approaches and our exploration of the very deep neural networks with the ResNet structure is also competitive. The global average pooling in our convolutional model enables the exploitation of the Class Activation Map (CAM) to find out the contributing region in the raw data for the specific labels. Our models provides a simple choice for the real world application and a good starting point for the future research. An overall analysis is provided to discuss the generalization capability of our models, learned features, network structures and the classification semantics.", "title": "" }, { "docid": "4453c85d0fc1513e9657731d84896864", "text": "A number of studies have looked at the prevalence rates of psychiatric disorders in the community in Pakistan over the last two decades. However, a very little information is available on psychiatric morbidity in primary health care. We therefore decided to measure prevalence of psychiatric disorders and their correlates among women from primary health care facilities in Lahore. We interviewed 650 women in primary health care settings in Lahore. We used a semi-structured interview and questionnaires to collect information during face-to-face interviews. Nearly two-third of the women (64.3%) in our study were diagnosed to have a psychiatric problem, while one-third (30.4%) suffered with Major Depressive Disorder. Stressful life events, verbal violence and battering were positively correlated with psychiatric morbidity and social support, using reasoning to resolve conflicts and education were negatively correlated with psychiatric morbidity. The prevalence of psychiatric disorders is in line with the prevalence figures found in community studies. 
Domestic violence is an important correlate which can be the focus of interventions.", "title": "" }, { "docid": "77aea5cc0a74546f5c8fef1dd39770bc", "text": "Road condition data are important in transportation management systems. Over the last decades, significant progress has been made and new approaches have been proposed for efficient collection of pavement condition data. However, the assessment of unpaved road conditions has been rarely addressed in transportation research. Unpaved roads constitute approximately 40% of the U.S. road network, and are the lifeline in rural areas. Thus, it is important for timely identification and rectification of deformation on such roads. This article introduces an innovative Unmanned Aerial Vehicle (UAV)-based digital imaging system focusing on efficient collection of surface condition data over rural roads. In contrast to other approaches, aerial assessment is proposed by exploring aerial imagery acquired from an unpiloted platform to derive a threedimensional (3D) surface model over a road distress area for distress measurement. The system consists of a lowcost model helicopter equipped with a digital camera, a Global Positioning System (GPS) receiver and an Inertial Navigation System (INS), and a geomagnetic sensor. A set of image processing algorithms has been developed for precise orientation of the acquired images, and generation of 3D road surface models and orthoimages, which allows for accurate measurement of the size and the dimension of the road surface distresses. The developed system has been tested over several test sites ∗To whom correspondence should be addressed. E-mail: chunsunz@ unimelb.edu.au. with roads of various surface distresses. The experiments show that the system is capable for providing 3D information of surface distresses for road condition assessment. Experiment results demonstrate that the system is very promising and provides high accuracy and reliable results. Evaluation of the system using 2D and 3D models with known dimensions shows that subcentimeter measurement accuracy is readily achieved. The comparison of the derived 3D information with the onsite manual measurements of the road distresses reveals differences of 0.50 cm, demonstrating the potential of the presented system for future practice.", "title": "" }, { "docid": "2d9473386a1838248cdb5dd919ea40e8", "text": "We present Deep Voice, a production-quality text-to-speech system constructed entirely from deep neural networks. Deep Voice lays the groundwork for truly end-to-end neural speech synthesis. The system comprises five major building blocks: a segmentation model for locating phoneme boundaries, a grapheme-tophoneme conversion model, a phoneme duration prediction model, a fundamental frequency prediction model, and an audio synthesis model. For the segmentation model, we propose a novel way of performing phoneme boundary detection with deep neural networks using connectionist temporal classification (CTC) loss. For the audio synthesis model, we implement a variant of WaveNet that requires fewer parameters and trains faster than the original. By using a neural network for each component, our system is simpler and more flexible than traditional text-tospeech systems, where each component requires laborious feature engineering and extensive domain expertise. 
Finally, we show that inference with our system can be performed faster than real time and describe optimized WaveNet inference kernels on both CPU and GPU that achieve up to 400x speedups over existing implementations.", "title": "" }, { "docid": "514bf9c9105dd3de95c3965bb86ebe36", "text": "Origami is the centuries-old art of folding paper, and recently, it is investigated as computer science: Given an origami with creases, the problem to determine if it can be flat after folding all creases is NP-hard. Another hundreds-old art of folding paper is a pop-up book. A model for the pop-up book design problem is given, and its computational complexity is investigated. We show that both of the opening book problem and the closing book problem are NP-hard.", "title": "" }, { "docid": "a0ca6986d59905cea49ed28fa378c69e", "text": "The epidemic of type 2 diabetes and impaired glucose tolerance is one of the main causes of morbidity and mortality worldwide. In both disorders, tissues such as muscle, fat and liver become less responsive or resistant to insulin. This state is also linked to other common health problems, such as obesity, polycystic ovarian disease, hyperlipidaemia, hypertension and atherosclerosis. The pathophysiology of insulin resistance involves a complex network of signalling pathways, activated by the insulin receptor, which regulates intermediary metabolism and its organization in cells. But recent studies have shown that numerous other hormones and signalling events attenuate insulin action, and are important in type 2 diabetes.", "title": "" }, { "docid": "6fd84345b0399a0d59d80fb40829eee2", "text": "This paper describes a method based on a sequenceto-sequence learning (Seq2Seq) with attention and context preservation mechanism for voice conversion (VC) tasks. Seq2Seq has been outstanding at numerous tasks involving sequence modeling such as speech synthesis and recognition, machine translation, and image captioning. In contrast to current VC techniques, our method 1) stabilizes and accelerates the training procedure by considering guided attention and proposed context preservation losses, 2) allows not only spectral envelopes but also fundamental frequency contours and durations of speech to be converted, 3) requires no context information such as phoneme labels, and 4) requires no time-aligned source and target speech data in advance. In our experiment, the proposed VC framework can be trained in only one day, using only one GPU of an NVIDIA Tesla K80, while the quality of the synthesized speech is higher than that of speech converted by Gaussian mixture model-based VC and is comparable to that of speech generated by recurrent neural network-based text-to-speech synthesis, which can be regarded as an upper limit on VC performance.", "title": "" }, { "docid": "89792ba96d1e7d9d7bc1673469ed58cd", "text": "The purpose of this study was to examine the relationship between popular endurance field tests and physical match performance in elite male youth soccer players. Eighteen young male soccer players (age 14.4 ± 0.1 years, height 1.67 ± 4.8 cm, body mass 53.6 ± 1.8 kg) were randomly chosen among a population of elite-level soccer players. Players were observed during international championship games of the corresponding age categories and randomly submitted to the level 1 of the Yo-Yo intermittent recovery test (Yo-Yo IR1), the Multistage Fitness Test (MSFT), and the Hoff test on separate occasions. 
Physical and physiological match demands were assessed using Global Positioning System technology and short-range telemetry (GPS Elite, Canberra, Australia), respectively. Players covered 6,087 ± 582 m (5,098-7,019 m) of which 15% (930 ± 362 m; 442-1,513) were performed as a high-intensity activity. During the first and second halves, players attained 86.8 ± 6.5 and 85.8 ± 5.8% of maximum heart rate (HRmax; p = 0.17) with peak HRs of 100 ± 2 and 99.4 ± 3.2% of HRmax, respectively. Players' Yo-Yo IR1 and MSFT performance were significantly related (r = 0.62-0.76) to a number of match physical activities. However, the Hoff test was only significantly related to sprint distance (r = 0.70, p = 0.04). The Yo-Yo IR1 showed a very large association with MSFT performance (r = 0.89, p < 0.0001). The results of this study showed that the Yo-Yo IR1 and MSFT may be regarded as valuable tests to assess match fitness and subsequently guide training prescription in youth soccer players. The very strong relationship between Yo-Yo IR1 and MSFT suggests their use according to the period of the season and the aerobic fitness level of the players. Because of the association of the Yo-Yo IR1 and MSFT with match physical performances, these tests should be considered in talent selection and development of players.", "title": "" }, { "docid": "0fb04ad70b29ab50eae9d3dfc5407675", "text": "Weather forecasting is one of the most challenging problems around the world. There are various reasons for this, owing to its experimental value in meteorology, but it is also a typical unbiased time series forecasting problem in scientific research. Many methods have been proposed by various scientists, and the motive behind this research is to predict more accurately. This paper contributes to the same goal using an artificial neural network (ANN) simulated in MATLAB to predict two important weather parameters, i.e. maximum and minimum temperature. The model has been trained using the past 60 years of real data collected from 1901 to 1960 and tested over 40 years to forecast maximum and minimum temperature. The results based on the mean square error (MSE) function confirm that this model, which is based on a multilayer perceptron, has the potential for successful application to weather forecasting.", "title": "" }, { "docid": "80de9b0ba596c19bfc8a99fd46201a99", "text": "We integrate the recently proposed spatial transformer network (SPN) (Jaderberg & Simonyan, 2015) into a recurrent neural network (RNN) to form an RNN-SPN model. We use the RNN-SPN to classify digits in cluttered MNIST sequences. The proposed model achieves a single digit error of 1.5% compared to 2.9% for a convolutional network and 2.0% for convolutional networks with SPN layers. The SPN outputs a zoomed, rotated and skewed version of the input image. We investigate different down-sampling factors (ratio of pixels in input and output) for the SPN and show that the RNN-SPN model is able to down-sample the input images without deteriorating performance. The down-sampling in RNN-SPN can be thought of as adaptive down-sampling that minimizes the information loss in the regions of interest. We attribute the superior performance of the RNN-SPN to the fact that it can attend to a sequence of regions of interest.", "title": "" }, { "docid": "66ba9c32c29e905a018aab3a25733fd1", "text": "Information environments have the power to affect people's perceptions and behaviors. In this paper, we present the results of studies in which we characterize the gender bias present in image search results for a variety of occupations. 
We experimentally evaluate the effects of bias in image search results on the images people choose to represent those careers and on people's perceptions of the prevalence of men and women in each occupation. We find evidence for both stereotype exaggeration and systematic underrepresentation of women in search results. We also find that people rate search results higher when they are consistent with stereotypes for a career, and shifting the representation of gender in image search results can shift people's perceptions about real-world distributions. We also discuss tensions between desires for high-quality results and broader societal goals for equality of representation in this space.", "title": "" }, { "docid": "10d69148c3a419e4ffe3bf1ca4c7c9d7", "text": "Discovering object classes from images in a fully unsupervised way is an intrinsically ambiguous task; saliency detection approaches however ease the burden on unsupervised learning. We develop an algorithm for simultaneously localizing objects and discovering object classes via bottom-up (saliency-guided) multiple class learning (bMCL), and make the following contributions: (1) saliency detection is adopted to convert unsupervised learning into multiple instance learning, formulated as bottom-up multiple class learning (bMCL); (2) we utilize the Discriminative EM (DiscEM) to solve our bMCL problem and show DiscEM's connection to the MIL-Boost method[34]; (3) localizing objects, discovering object classes, and training object detectors are performed simultaneously in an integrated framework; (4) significant improvements over the existing methods for multi-class object discovery are observed. In addition, we show single class localization as a special case in our bMCL framework and we also demonstrate the advantage of bMCL over purely data-driven saliency methods.", "title": "" }, { "docid": "efba71635ca38b4588d3e4200d655fee", "text": "BACKGROUND\nCircumcisions and cesarian sections are common procedures. Although complications to the newborn child fortunately are rare, it is important to emphasize the potential significance of this problem and its frequent iatrogenic etiology. The authors present 7 cases of genitourinary trauma in newborns, including surgical management and follow-up.\n\n\nMETHODS\nThe authors relate 7 recent cases of genitourinary trauma in newborns from a children's hospital in a major metropolitan area.\n\n\nRESULTS\nCase 1 and 2: Two infants suffered degloving injuries to both the prepuce and penile shaft from a Gomco clamp. Successful full-thickness skin grafting using the previously excised foreskin was used in 1 child. Case 3, 4, and 5: A Mogen clamp caused glans injuries in 3 infants. In 2, hemorrhage from the severed glans was controlled with topical epinephrine; the glans healed with a flattened appearance. Another infant sustained a laceration ventrally, requiring a delayed modified meatal advancement glanoplasty to correct the injury. Case 6: A male infant suffered a ventral slit and division of the ventral urethra before placement of a Gomco clamp. Formal hypospadias repair was required. Case 7: An emergent cesarean section resulted in a grade 4-perineal laceration in a female infant. The vaginal tear caused by the surgeon's finger, extended up to the posterior insertion of the cervix and into the rectum. The infant successfully underwent an emergent multilayered repair.\n\n\nCONCLUSIONS\nGenitourinary trauma in the newborn is rare but often necessitates significant surgical intervention. 
Circumcision often is the causative event. There has been only 1 prior report of a perineal injury similar to case 7, with a fatal outcome.", "title": "" }, { "docid": "0477f74fce5684f3fa4630a14a3b8bae", "text": "Central fatigue during exercise is the decrease in muscle force attributable to a decline in motoneuronal output. Several methods have been used to assess central fatigue; however, some are limited or not sensitive enough to detect failure in central drive. Central fatigue develops during many forms of exercise. A number of mechanisms may contribute to its development including an increased inhibition mediated by group III and IV muscle afferents along with a decrease in muscle spindle facilitation. In some situations, motor cortical output is shown to be suboptimal. A specific terminology for central fatigue is included.", "title": "" } ]
scidocsrr
1517db59a31b235a7a32c46df6943d79
Virtual AoA and AoD estimation for sparse millimeter wave MIMO channels
[ { "docid": "14fb6228827657ba6f8d35d169ad3c63", "text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.", "title": "" }, { "docid": "c62f8d08b45a16eb26a45e47a65e69b9", "text": "In this paper, we propose a feasible beamforming (BF) scheme realized in media access control (MAC) layer following the guidelines of the IEEE 802.15.3c criteria for millimeterwave 60GHz wireless personal area networks (60GHz WPANs). The proposed BF targets to minimize the BF set-up time and mitigates the high path loss of 60GHz WPAN systems. It is based on designed multi-resolution codebooks, which generate three kinds of patterns of different half power beam widths (HPBWs): quasi-omni pattern, sector and beam. These three kinds of patterns are employed in the three stages of the BF protocol, namely device-to-device (DEV-to-DEV) linking, sectorlevel searching and beam-level searching. All the three stages can be completed within one superframe, which minimizes the potential interference to other systems during BF set-up period. In this paper, we show some example codebooks and provide the details of BF procedure. Simulation results show that the setup time of the proposed BF protocol is as small as 2% when compared to the exhaustive searching protocol. The proposed BF is a complete design, it re-uses commands specified in IEEE 802.15.3c, completely compliant to the standard; It has thus been adopted by IEEE 802.15.3c as an optional functionality to realize Giga-bit-per-second (Gbps) communication in WPAN Systems.", "title": "" } ]
[ { "docid": "c0d7ba264ca5b8a4effeca047f416763", "text": "We propose a novel dependency-based hybrid tree model for semantic parsing, which converts natural language utterance into machine interpretable meaning representations. Unlike previous state-of-the-art models, the semantic information is interpreted as the latent dependency between the natural language words in our joint representation. Such dependency information can capture the interactions between the semantics and natural language words. We integrate a neural component into our model and propose an efficient dynamicprogramming algorithm to perform tractable inference. Through extensive experiments on the standard multilingual GeoQuery dataset with eight languages, we demonstrate that our proposed approach is able to achieve state-ofthe-art performance across several languages. Analysis also justifies the effectiveness of using our new dependency-based representation.1", "title": "" }, { "docid": "6ee26f725bfb63a6ff72069e48404e68", "text": "OBJECTIVE\nTo determine which routinely collected exercise test variables most strongly correlate with survival and to derive a fitness risk score that can be used to predict 10-year survival.\n\n\nPATIENTS AND METHODS\nThis was a retrospective cohort study of 58,020 adults aged 18 to 96 years who were free of established heart disease and were referred for an exercise stress test from January 1, 1991, through May 31, 2009. Demographic, clinical, exercise, and mortality data were collected on all patients as part of the Henry Ford ExercIse Testing (FIT) Project. Cox proportional hazards models were used to identify exercise test variables most predictive of survival. A \"FIT Treadmill Score\" was then derived from the β coefficients of the model with the highest survival discrimination.\n\n\nRESULTS\nThe median age of the 58,020 participants was 53 years (interquartile range, 45-62 years), and 28,201 (49%) were female. Over a median of 10 years (interquartile range, 8-14 years), 6456 patients (11%) died. After age and sex, peak metabolic equivalents of task and percentage of maximum predicted heart rate achieved were most highly predictive of survival (P<.001). Subsequent addition of baseline blood pressure and heart rate, change in vital signs, double product, and risk factor data did not further improve survival discrimination. The FIT Treadmill Score, calculated as [percentage of maximum predicted heart rate + 12(metabolic equivalents of task) - 4(age) + 43 if female], ranged from -200 to 200 across the cohort, was near normally distributed, and was found to be highly predictive of 10-year survival (Harrell C statistic, 0.811).\n\n\nCONCLUSION\nThe FIT Treadmill Score is easily attainable from any standard exercise test and translates basic treadmill performance measures into a fitness-related mortality risk score. The FIT Treadmill Score should be validated in external populations.", "title": "" }, { "docid": "bb64d33190d359461a4258e0ed3d3229", "text": "In this paper, we consider the class of first-order algebraic ordinary differential equations (AODEs), and study their rational solutions in three different approaches. A combinatorial approach gives a degree bound for rational solutions of a class of AODEs which do not have movable poles. Algebraic considerations yield an algorithm for computing rational solutions of quasilinear AODEs. 
And finally, ideas from algebraic geometry combine these results into an algorithm for finding all rational solutions of a class of first-order AODEs which covers all examples from the collection of Kamke. In particular, parametrizations of algebraic curves play an important role for a transformation of a parametrizable first-order AODE to a quasi-linear differential equation.", "title": "" }, { "docid": "205c1939369c6cc80838f562a57156a5", "text": "This paper examines the role of the human driver as the primary control element within the traditional driver-vehicle system. Lateral and longitudinal control tasks such as path-following, obstacle avoidance, and headway control are examples of steering and braking activities performed by the human driver. Physical limitations as well as various attributes that make the human driver unique and help to characterize human control behavior are described. Example driver models containing such traits and that are commonly used to predict the performance of the combined driver-vehicle system in lateral and longitudinal control tasks are identified.", "title": "" }, { "docid": "47caefb6e3228160c75f3ae1746248b8", "text": "A new resistively loaded vee dipole (RVD) is designed and implemented for ultrawide-band short-pulse ground-penetrating radar (GPR) applications. The new RVD is improved in terms of voltage standing wave ratio, gain, and front-to-back ratio while maintaining many advantages of the typical RVD, such as the ability to radiate a short pulse into a small spot on the ground, a low radar cross section, applicability in an array, etc. The improvements are achieved by curving the arms and modifying the Wu-King loading profile. The curve and the loading profile are designed to decrease the reflection at the drive point of the antenna while increasing the forward gain. The new RVD is manufactured by printing the curved arms on a thin Kapton film and loading them with chip resistors, which approximate the continuous loading profile. The number of resistors is chosen such that the resonant frequency due to the resistor spacing occurs at a frequency higher than the operation bandwidth. The antenna and balun are made in a module by sandwiching them between two blocks of polystyrene foam, attaching a plastic support, and encasing the foam blocks in heat-sealable plastic. The antenna module is mechanically reliable without significant performance degradation. The use of the new RVD module in a GPR system is also demonstrated with an experiment.", "title": "" }, { "docid": "1389323613225897330d250e9349867b", "text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever-increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today's big data world. The author demonstrates how to leverage a company's existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining. 
By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .", "title": "" }, { "docid": "53371fac3b92afe5bc6c51dccd95fc4b", "text": "Multi-frequency electrical impedance tomography (EIT) systems require stable voltage controlled current generators that will work over a wide frequency range and with a large variation in load impedance. In this paper we compare the performance of two commonly used designs: the first is a modified Howland circuit whilst the second is based on a current mirror. The output current and the output impedance of both circuits were determined through PSPICE simulation and through measurement. Both circuits were stable over the frequency ranges 1 kHz to 1 MHz. The maximum variation of output current with frequency for the modified Howland circuit was 2.0% and for the circuit based on a current mirror 1.6%. The output impedance for both circuits was greater than 100 kohms for frequencies up to 100 kHz. However, neither circuit achieved this output impedance at 1 MHz. Comparing the results from the two circuits suggests that there is little to choose between them in terms of a practical implementation.", "title": "" }, { "docid": "0e521af53f9faf4fee38843a22ec2185", "text": "Steering of main beam of radiation at fixed millimeter wave frequency in a Substrate Integrated Waveguide (SIW) Leaky Wave Antenna (LWA) has not been investigated so far in literature. In this paper a Half-Mode Substrate Integrated Waveguide (HMSIW) LWA is proposed which has the capability to steer its main beam at fixed millimeter wave frequency of 24GHz. Beam steering is made feasible by changing the capacitance of the capacitors, connected at the dielectric side of HMSIW. The full wave EM simulations show that the main beam scans from 36° to 57° in the first quadrant.", "title": "" }, { "docid": "db53a67d449cb36053422c5dbc07f8de", "text": "We propose CAVIA, a meta-learning method for fast adaptation that is scalable, flexible, and easy to implement. CAVIA partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks. At test time, the context parameters are updated with one or several gradient steps on a task-specific loss that is backpropagated through the shared part of the network. Compared to approaches that adjust all parameters on a new task (e.g., MAML), CAVIA can be scaled up to larger networks without overfitting on a single task, is easier to implement, and is more robust to the inner-loop learning rate. We show empirically that CAVIA outperforms MAML on regression, classification, and reinforcement learning problems.", "title": "" }, { "docid": "05eb1af3e6838640b6dc5c1c128cc78a", "text": "Predicting the success of referring expressions (RE) is vital for real-world applications such as navigation systems. Traditionally, research has focused on studying Referring Expression Generation (REG) in virtual, controlled environments. In this paper, we describe a novel study of spatial references from real scenes rather than virtual. First, we investigate how humans describe objects in open, uncontrolled scenarios and compare our findings to those reported in virtual environments. We show that REs in real-world scenarios differ significantly to those in virtual worlds. 
Second, we propose a novel approach to quantifying image complexity when complete annotations are not present (e.g. due to poor object recognition capabilities), and third, we present a model for success prediction of REs for objects in real scenes. Finally, we discuss implications for Natural Language Generation (NLG) systems and future directions.", "title": "" }, { "docid": "e4493c56867bfe62b7a96b33fb171fad", "text": "In the field of agricultural information, the automatic identification and diagnosis of maize leaf diseases is highly desired. To improve the identification accuracy of maize leaf diseases and reduce the number of network parameters, the improved GoogLeNet and Cifar10 models based on deep learning are proposed for leaf disease recognition in this paper. Two improved models that are used to train and test nine kinds of maize leaf images are obtained by adjusting the parameters, changing the pooling combinations, adding dropout operations and rectified linear unit functions, and reducing the number of classifiers. In addition, the number of parameters of the improved models is significantly smaller than that of the VGG and AlexNet structures. During the recognition of eight kinds of maize leaf diseases, the GoogLeNet model achieves a top-1 average identification accuracy of 98.9%, and the Cifar10 model achieves an average accuracy of 98.8%. The improved methods possibly improve the accuracy of maize leaf disease identification and reduce the number of convergence iterations, which can effectively improve model training and recognition efficiency.", "title": "" }, { "docid": "837b9d2834b72c7d917203457aafa421", "text": "The strongly nonlinear magnetic characteristic of Switched Reluctance Motors (SRMs) makes their torque control a challenging task. In contrast to standard current-based control schemes, we use Model Predictive Control (MPC) and directly manipulate the switches of the dc-link power converter. At each sampling time a constrained finite-time optimal control problem based on a discrete-time nonlinear prediction model is solved, yielding a receding horizon control strategy. The control objective is torque regulation while winding currents and converter switching frequency are minimized. Simulations demonstrate that a good closed-loop performance is achieved already for short prediction horizons, indicating the high potential of MPC in the control of SRMs.", "title": "" }, { "docid": "ec8684e227bf63ac2314ce3cb17e2e8b", "text": "Musical genre classification is the automatic classification of audio signals into user-defined labels describing pieces of music. A problem inherent to genre classification experiments in music information retrieval research is the use of songs from the same artist in both training and test sets. We show that this does not only lead to overoptimistic accuracy results but also selectively favours particular classification approaches. The advantage of using models of songs rather than models of genres vanishes when applying an artist filter. The same holds true for the use of spectral features versus fluctuation patterns for preprocessing of the audio files.", "title": "" }, { "docid": "52bee48854d8eaca3b119eb71d79c22d", "text": "In this paper, we present a new combined approach for feature extraction, classification, and context modeling in an iterative framework based on random decision trees and a huge number of features. 
A major focus of this paper is to integrate different kinds of feature types like color, geometric context, and auto context features in a joint, flexible and fast manner. Furthermore, we perform an in-depth analysis of multiple feature extraction methods and different feature types. Extensive experiments are performed on challenging facade recognition datasets, where we show that our approach significantly outperforms previous approaches with a performance gain of more than 15% on the most difficult dataset.", "title": "" }, { "docid": "99d1c93150dfc1795970323ec5bb418e", "text": "People can refer to quantities in a visual scene by using either exact cardinals (e.g. one, two, three) or natural language quantifiers (e.g. few, most, all). In humans, these two processes underlie fairly different cognitive and neural mechanisms. Inspired by this evidence, the present study proposes two models for learning the objective meaning of cardinals and quantifiers from visual scenes containing multiple objects. We show that a model capitalizing on a ‘fuzzy’ measure of similarity is effective for learning quantifiers, whereas the learning of exact cardinals is better accomplished when information about number is provided.", "title": "" }, { "docid": "424a0f5f4a725b85fabb8c7ee19c6e3c", "text": "The data on dental variability in natural populations of sibling species of common voles (“arvalis” group, genus Microtus) from European and Asian parts of the species’ ranges are summarized using a morphotype-based approach to analysis of dentition. Frequency distributions of the first lower (m1) and the third upper (M3) molar morphotypes are analyzed in about 65 samples of M. rossiaemeridionalis and M. arvalis represented by arvalis and obscurus karyotypic forms. Because of extreme similarity of morphotype dental patterns in the taxa studied, it is impossible to use molar morphotype frequencies for species identification. However, a morphotype-based approach to analysis of dental variability does allow analysis of inter-species comparisons from an evolutionary standpoint. Three patterns of dental complexity are established in the taxa studied: simple, basic (the most typical within the ranges of both species), and complex. In M. rossiaemeridionalis and in M. arvalis obscurus only the basic pattern of dentition occurs. In M. arvalis arvalis, both simple and basic dental patterns are found. Analysis of association of morphotype dental patterns with geographical and environmental variables reveals an increase in the number of complex molars with longitude and latitude: in M. arvalis the pattern of molar complication is more strongly related to longitude, and in M. rossiaemeridionalis—to latitude. Significant decrease in incidence of simple molars with climate continentality and increasing aridity is found in M. arvalis. The simple pattern of dentition is found in M. arvalis arvalis in Spain, along the Atlantic coast of France and on islands thereabout, in northeastern Germany and Kirov region in European Russia. Hypotheses to explain the distribution of populations with different dental patterns within the range of M. arvalis sensu stricto are discussed.", "title": "" }, { "docid": "8b224de0808d3ed64445d8e1d7a1a5b8", "text": "ASCIMER (Assessing Smart Cities in the Mediterranean Region) is a project developed by the Universidad Politecnica of Madrid (UPM) for the EIBURS call on “Smart City Development: Applying European and International Experience to the Mediterranean Region”. 
Nowadays, many initiatives aimed at analysing the conception process, deployment methods or outcomes of the -referred asSmart City projects are being developed in multiple fields. Since its conception, the Smart City notion has evolved from the execution of specific projects to the implementation of global strategies to tackle wider city challenges. ASCIMER ́s project takes as a departure point that any kind of Smart City assessment should give response to the real challenges that cities of the 21st century are facing. It provides a comprehensive overview of the available possibilities and relates them to the specific city challenges. A selection of Smart City initiatives will be presented in order to establish relations between the identified city challenges and real Smart Projects designed to solve them. As a result of the project, a Projects Guide has been developed as a tool for the implementation of Smart City projects that efficiently respond to complex and diverse urban challenges without compromising their sustainable development and while improving the quality of life of their citizens.", "title": "" }, { "docid": "087b1951ec35db6de6f4739404277913", "text": "A possible scenario for the evolution of Television Broadcast is the adoption of 8 K resolution video broadcasting. To achieve the required bit-rates MIMO technologies are an actual candidate. In this scenario, this paper collected electric field levels from a MIMO experimental system for TV broadcasting to tune the parameters of the ITU-R P.1546 propagation model, which has been employed to model VHF and UHF broadcast channels. The parameters are tuned for each polarization alone and for both together. This is done considering multiple reception points and also a larger capturing time interval for a fixed reception site. Significant improvements on the match between the actual and measured link budget are provided by the optimized parameters.", "title": "" }, { "docid": "e6cbd8d32233e7e683b63a5a1a0e91f8", "text": "Background:Quality of life is an important end point in clinical trials, yet there are few quality of life questionnaires for neuroendocrine tumours.Methods:This international multicentre validation study assesses the QLQ-GINET21 Quality of Life Questionnaire in 253 patients with gastrointestinal neuroendocrine tumours. All patients were requested to complete two quality of life questionnaires – the EORTC Core Quality of Life questionnaire (QLQ-C30) and the QLQ-GINET21 – at baseline, and at 3 and 6 months post-baseline; the psychometric properties of the questionnaire were then analysed.Results:Analysis of QLQ-GINET21 scales confirmed appropriate aggregation of the items, except for treatment-related symptoms, where weight gain showed low correlation with other questions in the scale; weight gain was therefore analysed as a single item. Internal consistency of scales using Cronbach’s α coefficient was >0.7 for all parts of the QLQ-GINET21 at 6 months. Intraclass correlation was >0.85 for all scales. Discriminant validity was confirmed, with values <0.70 for all scales compared with each other.Scores changed in accordance with alterations in performance status and in response to expected clinical changes after therapies. 
Mean scores were similar for pancreatic and other tumours.Conclusion:The QLQ-GINET21 is a valid and responsive tool for assessing quality of life in the gut, pancreas and liver neuroendocrine tumours.", "title": "" }, { "docid": "1f364472fcf7da9bfc18d9bb8a521693", "text": "The Cre/lox system is widely used in mice to achieve cell-type-specific gene expression. However, a strong and universally responding system to express genes under Cre control is still lacking. We have generated a set of Cre reporter mice with strong, ubiquitous expression of fluorescent proteins of different spectra. The robust native fluorescence of these reporters enables direct visualization of fine dendritic structures and axonal projections of the labeled neurons, which is useful in mapping neuronal circuitry, imaging and tracking specific cell populations in vivo. Using these reporters and a high-throughput in situ hybridization platform, we are systematically profiling Cre-directed gene expression throughout the mouse brain in several Cre-driver lines, including new Cre lines targeting different cell types in the cortex. Our expression data are displayed in a public online database to help researchers assess the utility of various Cre-driver lines for cell-type-specific genetic manipulation.", "title": "" } ]
scidocsrr
21759bb66b76c9b2efcb430066410547
A PWM LLC Type Resonant Converter Adapted to Wide Output Range in PEV Charging Applications
[ { "docid": "f273a4bcebfe41e9b3ebb3027c67c27d", "text": "This paper reviews the current status and implementation of battery chargers, charging power levels, and infrastructure for plug-in electric vehicles and hybrids. Charger systems are categorized into off-board and on-board types with unidirectional or bidirectional power flow. Unidirectional charging limits hardware requirements and simplifies interconnection issues. Bidirectional charging supports battery energy injection back to the grid. Typical on-board chargers restrict power because of weight, space, and cost constraints. They can be integrated with the electric drive to avoid these problems. The availability of charging infrastructure reduces on-board energy storage requirements and costs. On-board charger systems can be conductive or inductive. An off-board charger can be designed for high charging rates and is less constrained by size and weight. Level 1 (convenience), Level 2 (primary), and Level 3 (fast) power levels are discussed. Future aspects such as roadbed charging are presented. Various power level chargers and infrastructure configurations are presented, compared, and evaluated based on amount of power, charging time and location, cost, equipment, and other factors.", "title": "" }, { "docid": "fd8f4206ae749136806a35c0fe1597c7", "text": "In this paper, an inductor-inductor-capacitor (LLC) resonant dc-dc converter design procedure for an onboard lithium-ion battery charger of a plug-in hybrid electric vehicle (PHEV) is presented. Unlike traditional resistive load applications, the characteristic of a battery load is nonlinear and highly related to the charging profiles. Based on the features of an LLC converter and the characteristics of the charging profiles, the design considerations are studied thoroughly. The worst-case conditions for primary-side zero-voltage switching (ZVS) operation are analytically identified based on fundamental harmonic approximation when a constant maximum power (CMP) charging profile is implemented. Then, the worst-case operating point is used as the design targeted point to ensure soft-switching operation globally. To avoid the inaccuracy of fundamental harmonic approximation approach in the below-resonance region, the design constraints are derived based on a specific operation mode analysis. Finally, a step-by-step design methodology is proposed and validated through experiments on a prototype converting 400 V from the input to an output voltage range of 250-450 V at 3.3 kW with a peak efficiency of 98.2%.", "title": "" } ]
[ { "docid": "139b3dae4713a5bcff97e1b209bd3206", "text": "Utilizing parametric and nonparametric techniques, we assess the role of a heretofore relatively unexplored ‘input’ in the educational process, homework, on academic achievement. Our results indicate that homework is an important determinant of student test scores. Relative to more standard spending related measures, extra homework has a larger and more significant impact on test scores. However, the effects are not uniform across different subpopulations. Specifically, we find additional homework to be most effective for high and low achievers, which is further confirmed by stochastic dominance analysis. Moreover, the parametric estimates of the educational production function overstate the impact of schooling related inputs. In all estimates, the homework coefficient from the parametric model maps to the upper deciles of the nonparametric coefficient distribution and as a by-product the parametric model understates the percentage of students with negative responses to additional homework. JEL: C14, I21, I28", "title": "" }, { "docid": "4e0108df18154d4d7d90203ad7ba2156", "text": "Multi-stage programming languages provide a convenient notation for explicitly staging programs. Staging a definitional interpreter for a domain specific language is one way of deriving an implementation that is both readable and efficient. In an untyped setting, staging an interpreter \"removes a complete layer of interpretive overhead\", just like partial evaluation. In a typed setting however, Hindley-Milner type systems do not allow us to exploit typing information in the language being interpreted. In practice, this can mean a slowdown cost by a factor of three or mor.Previously, both type specialization and tag elimination were applied to this problem. In this paper we propose an alternative approach, namely, expressing the definitional interpreter in a dependently typed programming language. We report on our experience with the issues that arise in writing such an interpreter and in designing such a language. .To demonstrate the soundness of combining staging and dependent types in a general sense, we formalize our language (called Meta-D) and prove its type safety. To formalize Meta-D, we extend Shao, Saha, Trifonov and Papaspyrou's λH language to a multi-level setting. Building on λH allows us to demonstrate type safety in a setting where the type language contains all the calculus of inductive constructions, but without having to repeat the work needed for establishing the soundness of that system.", "title": "" }, { "docid": "04953f3a55a77b9a35e7cea663c6387e", "text": "-This paper presents a calibration procedure for a fish-eye lens (a high-distortion lens) mounted on a CCD TV camera. The method is designed to account for the differences in images acquired via a distortion-free lens camera setup and the images obtained by a fish-eye lens camera. The calibration procedure essentially defines a mapping between points in the world coordinate system and their corresponding point locations in the image plane. This step is important for applications in computer vision which involve quantitative measurements. The objective of this mapping is to estimate the internal parameters of the camera, including the effective focal length, one-pixel width on the image plane, image distortion center, and distortion coefficients. 
The number of parameters to be calibrated is reduced by using a calibration pattern with equally spaced dots and assuming a pin-hole model camera behavior for the image center, thus assuming negligible distortion at the image distortion center. Our method employs a nonlinear transformation between points in the world coordinate system and their corresponding location on the image plane. A Lagrangian minimization method is used to determine the coefficients of the transformation. The validity and effectiveness of our calibration and distortion correction procedure are confirmed by application of this procedure on real images. Copyright © 1996 Pattern Recognition Society. Published by Elsevier Science Ltd. Keywords: camera calibration, lens distortion, intrinsic camera parameters, fish-eye lens, optimization", "title": "" }, { "docid": "43e630794f1bce27688d2cedbb19f17d", "text": "The systematic maintenance of mining machinery and equipment is the crucial factor for the proper functioning of a mine without production process interruption. For high-quality maintenance of the technical systems in mining, it is necessary to conduct a thorough analysis of machinery and accompanying elements in order to determine the critical elements in the system which are prone to failures. The risk assessment of the failures of system parts leads to obtaining precise indicators of failures which are also excellent guidelines for maintenance services. This paper presents a model of the risk assessment of technical systems failure based on the fuzzy sets theory, fuzzy logic and min–max composition. The risk indicators, severity, occurrence and detectability, are analyzed. The risk indicators are given as linguistic variables. The model presented was applied for assessing the risk level of belt conveyor elements failure which works in severe conditions in a coal mine. Moreover, this paper shows the advantages of this model when compared to a standard procedure of RPN calculation in the FMEA method of risk", "title": "" }, { "docid": "6981940f7994fdfa1d216a3191cc6ad1", "text": "How good security at the NSA could have stopped him.", "title": "" }, { "docid": "da4c868b35a235e25b96448337f07a0b", "text": "In the last few decades, human activity recognition has attracted considerable research attention from a wide range of pattern recognition and human-computer interaction researchers due to its prominent applications such as smart home health care. For instance, activity recognition systems can be adopted in a smart home health care system to improve the rehabilitation process of patients. There are various ways of using different sensors for human activity recognition in a smartly controlled environment. Among these, physical human activity recognition through wearable sensors provides valuable information about an individual’s degree of functional ability and lifestyle. In this paper, we present a smartphone inertial sensors-based approach for human activity recognition. Efficient features are first extracted from raw data. The features include mean, median, autoregressive coefficients, etc. The features are further processed by a kernel principal component analysis (KPCA) and linear discriminant analysis (LDA) to make them more robust. Finally, the features are trained with a Deep Belief Network (DBN) for successful activity recognition.
The proposed approach was compared with traditional expression recognition approaches such as typical multiclass Support Vector Machine (SVM) and Artificial Neural Network (ANN) where it outperformed them. Keywords— Activity Recognition, Sensors, Smartphones, Deep Belief Network.", "title": "" }, { "docid": "f8947be81285e037eef69c5d2fcb94fb", "text": "To build a flexible and an adaptable architecture network supporting variety of services and their respective requirements, 5G NORMA introduced a network of functions based architecture breaking the major design principles followed in the current network of entities based architecture. This revolution exploits the advantages of the new technologies like Software-Defined Networking (SDN) and Network Function Virtualization (NFV) in conjunction with the network slicing and multitenancy concepts. In this paper we focus on the concept of Software Defined for Mobile Network Control (SDM-C) network: its definition, its role in controlling the intra network slices resources, its specificity to be QoE aware thanks to the QoE/QoS monitoring and modeling component and its complementarity with the orchestration component called SDM-O. To operate multiple network slices on the same infrastructure efficiently through controlling resources and network functions sharing among instantiated network slices, a common entity named SDM-X is introduced. The proposed design brings a set of new capabilities to make the network energy efficient, a feature that is discussed through some use cases.", "title": "" }, { "docid": "de39f498f28cf8cfc01f851ca3582d32", "text": "Program autotuning has been shown to achieve better or more portable performance in a number of domains. However, autotuners themselves are rarely portable between projects, for a number of reasons: using a domain-informed search space representation is critical to achieving good results; search spaces can be intractably large and require advanced machine learning techniques; and the landscape of search spaces can vary greatly between different problems, sometimes requiring domain specific search techniques to explore efficiently.\n This paper introduces OpenTuner, a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests. We demonstrate the efficacy and generality of OpenTuner by building autotuners for 7 distinct projects and 16 total benchmarks, showing speedups over prior techniques of these projects of up to 2.8x with little programmer effort.", "title": "" }, { "docid": "dd40063dd10027f827a65976261c8683", "text": "Many software process methods and tools presuppose the existence of a formal model of a process. Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. 
Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study.", "title": "" }, { "docid": "bb5e00ac09e12f3cdb097c8d6cfde9a9", "text": "3D biomaterial printing has emerged as a potentially revolutionary technology, promising to transform both research and medical therapeutics. Although there has been recent progress in the field, on-demand fabrication of functional and transplantable tissues and organs is still a distant reality. To advance to this point, there are two major technical challenges that must be overcome. The first is expanding upon the limited variety of available 3D printable biomaterials (biomaterial inks), which currently do not adequately represent the physical, chemical, and biological complexity and diversity of tissues and organs within the human body. Newly developed biomaterial inks and the resulting 3D printed constructs must meet numerous interdependent requirements, including those that lead to optimal printing, structural, and biological outcomes. The second challenge is developing and implementing comprehensive biomaterial ink and printed structure characterization combined with in vitro and in vivo tissueand organ-specific evaluation. This perspective outlines considerations for addressing these technical hurdles that, once overcome, will facilitate rapid advancement of 3D biomaterial printing as an indispensable tool for both investigating complex tissue and organ morphogenesis and for developing functional devices for a variety of diagnostic and regenerative medicine applications. PAPER 5 Contributed equally to this work. REcEivEd", "title": "" }, { "docid": "c7f1e26d27c87bfa0da637c28dbcdeda", "text": "There has recently been an increased interest in named entity recognition and disambiguation systems at major conferences such as WWW, SIGIR, ACL, KDD, etc. However, most work has focused on algorithms and evaluations, leaving little space for implementation details. In this paper, we discuss some implementation and data processing challenges we encountered while developing a new multilingual version of DBpedia Spotlight that is faster, more accurate and easier to configure. We compare our solution to the previous system, considering time performance, space requirements and accuracy in the context of the Dutch and English languages. Additionally, we report results for 9 additional languages among the largest Wikipedias. Finally, we present challenges and experiences to foment the discussion with other developers interested in recognition and disambiguation of entities in natural language text.", "title": "" }, { "docid": "9b470feac9ae4edd11b87921934c9fc2", "text": "Cutaneous melanoma may in some instances be confused with seborrheic keratosis, which is a very common neoplasia, more often mistaken for actinic keratosis and verruca vulgaris. Melanoma may clinically resemble seborrheic keratosis and should be considered as its possible clinical simulator. 
We report a case of melanoma with dermatoscopic characteristics of seborrheic keratosis and emphasize the importance of the dermatoscopy algorithm in differentiating between a melanocytic and a non-melanocytic lesion, of the excisional biopsy for the establishment of the diagnosis of cutaneous tumors, and of the histopathologic examination in all surgically removed samples.", "title": "" }, { "docid": "b3c83fc9495387f286ea83d00673b5b3", "text": "A new walk compensation method for a pulsed time-of-flight rangefinder is suggested. The receiver channel operates without gain control using leading edge timing discrimination principle. The generated walk error is compensated for by measuring the pulse length and knowing the relation between the walk error and pulse length. The walk compensation is possible also at the range where the signal is clipped and where the compensation method by amplitude measurement is impossible. Based on the simulations walk error can be compensated within the dynamic range of 1:30 000.", "title": "" }, { "docid": "436a250dc621d58d70bee13fd3595f06", "text": "The solid-state transformer allows add-on intelligence to enhance power quality compatibility between source and load. It is desired to demonstrate the benefits gained by the use of such a device. Recent advancement in semiconductor devices and converter topologies facilitated a newly proposed intelligent universal transformer (IUT), which can isolate a disturbance from either source or load. This paper describes the basic circuit and the operating principle for the multilevel converter based IUT and its applications for medium voltages. Various power quality enhancement features are demonstrated with computer simulation for a complete IUT circuit.", "title": "" }, { "docid": "6afaf6c8059d9a8b4201af3ab0e9c2ba", "text": "\" Memory is made up of a number of interrelated systems, organized structures of operating components consisting of neural substrates and their behavioral and cognitive correlates. A ternary classificatory scheme of memory is proposed in which procedural, semantic, and episodic memory constitute a \"monohierarchical\" arrangement: Episodic memory is a specialized subsystem of semantic memory, and semantic memory is a specialized subsystem of procedural memory. The three memory systems differ from one another in a number of ways, including the kind of consciousness that characterizes their operations. The ternary scheme overlaps with dichotomies and trichotomies of memory proposed by others. Evidence for multiple systems is derived from many sources. Illustrative data are provided by experiments in which direct priming effects are found to be both functionally and stochastically independent of recognition memory. Solving puzzles in science has much in common with solving puzzles for amusement, but the two differ in important respects. Consider, for instance, the jigsaw puzzle that scientific activity frequently imitates. The everyday version of the puzzle is determinate: It consists of a target picture and jigsaw pieces that, when properly assembled, are guaranteed to match the picture. Scientific puzzles are indeterminate: The number of pieces required to complete a picture is unpredictable; a particular piece may fit many pictures or none; it may fit only one picture, but the picture itself may be unknown; or the hypothetical picture may be imagined, but its component pieces may remain undiscovered. This article is about a current puzzle in the science of memory. 
It entails an imaginary picture and a search for pieces that fit it. The picture, or the hypothesis, depicts memory as consisting of a number of systems, each system serving somewhat different purposes and operating according to somewhat different principles. Together they form the marvelous capacity that we call by the single name of memory, the capacity that permits organisms to benefit from their past experiences. Such a picture is at variance with conventional wisdom that holds memory to be essentially a single system, the idea that \"memory is memory.\" The article consists of three main sections. In the first, I present some pretheoretical reasons for hypothesizing the existence of multiple memory systems and briefly discuss the concept of memory system. In the second, I describe a ternary classificatory scheme of memory--consisting of procedural, semantic, and episodic memory--and briefly compare this scheme with those proposed by others. In the third, I discuss the nature and logic of evidence for multiple systems and describe some experiments that have yielded data revealing independent effects of one and the same act of learning, effects seemingly at variance with the idea of a single system. I answer the question posed in the title of the article in the short concluding section. Pretheoretical Considerations. Why Multiple Memory Systems? It is possible to identify several a priori reasons why we should break with long tradition (Tulving, 1984a) and entertain thoughts about multiple memory systems. I mention five here. The first reason in many ways is perhaps the most compelling: No profound generalizations can be made about memory as a whole, but general statements about particular kinds of memory are perfectly possible. Thus, many questionable claims about memory in the literature, claims that give rise to needless and futile arguments, would become noncontroversial if their domain was restricted to parts of memory. Second, memory, like everything else in our world, has become what it is through a very long evolutionary process. Such a process seldom forms a continuous smooth line, but is characterized by sudden twists, jumps, shifts, and turns. One might expect, therefore, that the brain structures and mechanisms that (together with their behavioral and mental correlates) go to make up memory will also reflect such evolutionary quirks (Oakley, 1983). The third reason is suggested by comparisons with other psychological functions. Consider, for instance, the interesting phenomenon of blindsight: People with damage to the visual cortex are blind in a part of their visual field in that they do not see objects in that part, yet they can accurately point to and discriminate these objects in a forced-choice situation (e.g., Weiskrantz, 1980; Weiskrantz, Warrington, Sanders, & Marshall, 1974). Such facts imply that different brain mechanisms exist for picking up information about the visual environment. Or consider the massive evidence for the existence of two separate cortical pathways involved in vision, one mediating recognition of objects, the other their location in space (e.g., Mishkin, Ungerleider, & Macko, 1983; Ungerleider & Mishkin, 1982).
If \"seeing\" things--something that phenomenal experience tells us is clearly unitary--is subserved by separable neural-cognitive systems, it is possible that learning and remembering, too, appear to be unitary only because of the absence of contrary evidence. The fourth general reason derives from what I think is an unassailable assumption that most, if not all, of our currently held ideas and theories about mental processes are wrong and that sooner or later in the future they will be replaced with more adequate concepts, concepts that fit nature better (Tulving, 1979). Our task, therefore, should be to hasten the arrival of such a future. Among other things, we should be willing to contemplate the possibility that the \"memory is memory\" view is wrong and look for a better alternative. The fifth reason lies in a kind of failure of imagination: It is difficult to think how varieties of learning and memory that appear to be so different on inspection can reflect the workings of one and the same underlying set of structures and processes. Editor's note. This article is based on a Distinguished Scientific Contribution Award address presented at the meeting of the American Psychological Association, Toronto, Canada, August 26, 1984. Award addresses, submitted by award recipients, are published as received except for minor editorial changes designed to maintain American Psychologist format. This reflects a policy of recognizing distinguished award recipients by eliminating the usual editorial review process to provide a forum consistent with that employed in delivering the award address. Author's note. This work was supported by the Natural Sciences and Engineering Research Council of Canada (Grant No. A8632) and by a Special Research Program Grant from the Connaught Fund, University of Toronto. I would like to thank Fergus Craik and Daniel Schacter for their comments on the article and Janine Law for help with library research and the preparation of the manuscript. Requests for reprints should be sent to Endel Tulving, Department of Psychology, University of Toronto, Toronto, Canada, M5S 1A1. It is difficult to imagine, for instance, that perceptual-motor adaptations to distorting lenses and their aftereffects (e.g., Kohler, 1962) are mediated by the same memory system that enables an individual to answer affirmatively when asked whether Abraham Lincoln is dead. It is equally difficult to imagine that the improved ability to make visual acuity judgments, resulting from many sessions of practice without reinforcement or feedback (e.g., Tulving, 1958), has much in common with a person's ability to remember the funeral of a close friend. If we reflect on the limits of generalizations about memory, think about the twists and turns of evolution, examine possible analogies with other biological and psychological systems, believe that most current ideas we have about the human mind are wrong, and have great difficulty apprehending sameness in different varieties of learning and memory, we might be ready to imagine the possibility that memory consists of a number of interrelated systems. But what exactly do we mean by a memory", "title": "" }, { "docid": "436900539406faa9ff34c1af12b6348d", "text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized.
The basic principles and assumptions underlying the PATH work are identified, followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.", "title": "" }, { "docid": "d3795f971fe0eeb89f1019bcceea8ae2", "text": "This meta-analytic review of 148 studies on child and adolescent direct and indirect aggression examined the magnitude of gender differences, intercorrelations between forms, and associations with maladjustment. Results confirmed prior findings of gender differences (favoring boys) in direct aggression and trivial gender differences in indirect aggression. Results also indicated a substantial intercorrelation (r = .76) between these forms. Despite this high intercorrelation, the 2 forms showed unique associations with maladjustment: Direct aggression is more strongly related to externalizing problems, poor peer relations, and low prosocial behavior, and indirect aggression is related to internalizing problems and higher prosocial behavior. Moderation of these effect sizes by method of assessment, age, gender, and several additional variables was systematically investigated.", "title": "" }, { "docid": "c36cf93863266df18f7514efae35e6aa", "text": "Why are people attracted to humanoid robots and androids? The answer is simple: because human beings are attuned to understand or interpret human expressions and behaviors, especially those that exist in their surroundings. As they grow, infants, who are supposedly born with the ability to discriminate various types of stimuli, gradually adapt and fine-tune their interpretations of detailed social clues from other voices, languages, facial expressions, or behaviors (Pascalis et al., 2002). Perhaps due to this functionality of nature and nurture, people have a strong tendency to anthropomorphize nearly everything they encounter. This is also true for computers or robots. In other words, when we see PCs or robots, some automatic process starts running inside us that tries to interpret them as human. The media equation theory (Reeves & Nass, 1996) first explicitly articulated this tendency within us. Since then, researchers have been pursuing the key element to make people feel more comfortable with computers or creating an easier and more intuitive interface to various information devices. This pursuit has also begun spreading in the field of robotics. Recently, researchers' interests in robotics are shifting from traditional studies on navigation and manipulation to human-robot interaction. A number of studies have investigated how people respond to robot behaviors and how robots should behave so that people can easily understand them (Fong et al., 2003; Breazeal, 2004; Kanda et al., 2004). Many insights from developmental or cognitive psychology have been implemented and examined to see how they affect the human response or whether they help robots produce smooth and natural communication with humans.
However, human-robot interaction studies have been neglecting one issue: the \"appearance versus behavior problem.\" We empirically know that appearance, one of the most significant elements in communication, is a crucial factor in the evaluation of interaction (See Figure 1). The interactive robots developed so far had very mechanical outcomes that do appear as “robots.” Researchers tried to make such interactive robots “humanoid” by equipping them with heads, eyes, or hands so that their appearance more closely resembled human beings and to enable them to make such analogous human movements or gestures as staring, pointing, and so on. Functionality was considered the primary concern in improving communication with humans. In this manner, many studies have compared robots with different behaviors. Thus far, scant attention has been paid to robot appearances. Although", "title": "" }, { "docid": "6414893702d8f332f5a7767fd3811395", "text": "Differential privacy has become the dominant standard in the research community for strong privacy protection. There has been a flood of research into query answering algorithms that meet this standard. Algorithms are becoming increasingly complex, and in particular, the performance of many emerging algorithms is data dependent, meaning the distribution of the noise added to query answers may change depending on the input data. Theoretical analysis typically only considers the worst case, making empirical study of average case performance increasingly important. In this paper we propose a set of evaluation principles which we argue are essential for sound evaluation. Based on these principles we propose DPBench, a novel evaluation framework for standardized evaluation of privacy algorithms. We then apply our benchmark to evaluate algorithms for answering 1- and 2-dimensional range queries. The result is a thorough empirical study of 15 published algorithms on a total of 27 datasets that offers new insights into algorithm behavior---in particular the influence of dataset scale and shape---and a more complete characterization of the state of the art. Our methodology is able to resolve inconsistencies in prior empirical studies and place algorithm performance in context through comparison to simple baselines. Finally, we pose open research questions which we hope will guide future algorithm design.", "title": "" } ]
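The DPBench passage above compares data-dependent range-query algorithms against simple baselines. One such baseline, sketched here under an assumed dataset and an assumed privacy budget (this is not DPBench code), is the plain Laplace mechanism: perturb every histogram bin once, then answer any 1-D range query by summing the noisy bins.

```python
import numpy as np

# Assumed example data and epsilon; illustrative baseline only, not DPBench code.
rng = np.random.default_rng(0)
hist = np.array([12, 40, 7, 0, 3, 55, 21, 9], dtype=float)   # true counts per bin
epsilon, sensitivity = 0.5, 1.0     # adding/removing one record changes one bin by 1

# Laplace mechanism: perturb each bin once; all range queries reuse the same noisy bins.
noisy = hist + rng.laplace(scale=sensitivity / epsilon, size=hist.size)

def range_query(counts, lo, hi):
    """Inclusive sum of bins lo..hi."""
    return counts[lo:hi + 1].sum()

print("true  count on [2, 5]:", range_query(hist, 2, 5))
print("noisy count on [2, 5]:", range_query(noisy, 2, 5))
```

Data-dependent algorithms of the kind benchmarked above try to beat this baseline by adapting the noise or the bin structure to the input, which is why their average-case behavior varies with dataset scale and shape.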
scidocsrr
d5cee6a9c8273585f02c612dee8d2b17
What is beautiful is usable
[ { "docid": "d380a5de56265c80309733370c612316", "text": "Two experiments demonstrated that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited. In both studies subjects first received false feedback, indicating that they had either succeeded or failed on a novel discrimination task and then were thoroughly debriefed concerning the predetermined and random nature of this outcome manipulation. In experiment 2, both the initial outcome manipulation and subsequent debriefing were watched and overheard by observers. Both actors and observers showed substantial perseverance of initial impressions concerning the actors' performance and abilities following a standard \"outcome\" debriefing. \"Process\" debriefing, in which explicit discussion of the perseverance process was provided, generally proved sufficient to eliminate erroneous self-perceptions. Biased attribution processes that might underlie perserverance phenomena and the implications of the present data for the ethical conduct of deception research are discussed.", "title": "" } ]
[ { "docid": "c39b143861d1e0c371ec1684bb29f4cc", "text": "Data races are a particularly unpleasant kind of threading bugs. They are hard to find and reproduce -- you may not observe a bug during the entire testing cycle and will only see it in production as rare unexplainable failures. This paper presents ThreadSanitizer -- a dynamic detector of data races. We describe the hybrid algorithm (based on happens-before and locksets) used in the detector. We introduce what we call dynamic annotations -- a sort of race detection API that allows a user to inform the detector about any tricky synchronization in the user program. Various practical aspects of using ThreadSanitizer for testing multithreaded C++ code at Google are also discussed.", "title": "" }, { "docid": "be7b6112f147213511a3c433337c2da7", "text": "We assessed the physical and chemical stability of docetaxel infusion solutions. Stability of the antineoplastic drug was determined 1.) after reconstitution of the injection concentrate and 2.) after further dilution in two commonly used vehicle‐solutions, 0.9% sodium chloride and 5% dextrose, in PVC bags and polyolefine containers. Chemical stability was measured by using a stability‐indicating HPLC assay with ultraviolet detection. Physical stability was determined by visual inspection. The stability tests revealed that reconstituted docetaxel solutions (= premix solutions) are physico‐chemically stable (at a level ≥ 95% docetaxel) for a minimum of four weeks, independent of the storage temperature (refrigerated, room temperature). Diluted infusion solutions (docetaxel concentration 0.3 mg/ml and 0.9 mg/ml), with either vehicle‐solution, proved physico‐chemically stable (at a level ≥ 95% docetaxel) for a minimum of four weeks, when prepared in polyolefine containers and stored at room temperature. However, diluted infusion solutions exhibited limited physical stability in PVC bags, because docetaxel precipitation occured irregularly, though not before day 5 of storage. In addition, time‐dependent DEHP‐leaching from PVC infusion bags by docetaxel infusion solutions must be considered.", "title": "" }, { "docid": "a1317e75e1616b2922e5df02f69076d9", "text": "Fixed-length embeddings of words are very useful for a variety of tasks in speech and language processing. Here we systematically explore two methods of computing fixed-length embeddings for variable-length sequences. We evaluate their susceptibility to phonetic and speaker-specific variability on English, a high resource language, and Xitsonga, a low resource language, using two evaluation metrics: ABX word discrimination and ROC-AUC on same-different phoneme n-grams. We show that a simple downsampling method supplemented with length information can be competitive with the variable-length input feature representation on both evaluations. Recurrent autoencoders trained without supervision can yield even better results at the expense of increased computational complexity.", "title": "" }, { "docid": "d935679ba64755efc915cdfd4178f995", "text": "A dual-band passive radio frequency identification (RFID) tag antenna applicable for a recessed cavity in metallic objects such as heavy equipment, vehicles, aircraft, and containers with long read range is proposed by using an artificial magnetic conductor (AMC) ground plane. The proposed tag antenna consists of a bowtie antenna and a recessed cavity with the AMC ground plane installed on the bottom side of the cavity. 
The AMC ground plane is utilized to provide dual-band operation at European (869.5 869.7 MHz) and Korean (910 914 MHz) passive UHF RFID bands by replacing the bottom side of the metallic cavity of a PEC-like behavior and, therefore, changing the reflection phase of the ground plane. It is worthwhile to mention that the European and the Korean UHF RFID bands are allocated very closely, and the frequency separation ratio between the two bands is just about 0.045, which is very small. It is demonstrated by experiment that the maximum reading distance of the proposed tag antenna with optimized dimensions can be improved more than 3.1 times at the two RFID bands compared to a commercial RFID tag.", "title": "" }, { "docid": "f55c7479777d1b5c2265369d69c5f789", "text": "In an object-oriented program, a unit test often consists of a sequence of method calls that create and mutate objects, then use them as arguments to a method under test. It is challenging to automatically generate sequences that are legal and behaviorally-diverse, that is, reaching as many different program states as possible.\n This paper proposes a combined static and dynamic automated test generation approach to address these problems, for code without a formal specification. Our approach first uses dynamic analysis to infer a call sequence model from a sample execution, then uses static analysis to identify method dependence relations based on the fields they may read or write. Finally, both the dynamically-inferred model (which tends to be accurate but incomplete) and the statically-identified dependence information (which tends to be conservative) guide a random test generator to create legal and behaviorally-diverse tests.\n Our Palus tool implements this testing approach. We compared its effectiveness with a pure random approach, a dynamic-random approach (without a static phase), and a static-random approach (without a dynamic phase) on several popular open-source Java programs. Tests generated by Palus achieved higher structural coverage and found more bugs.\n Palus is also internally used in Google. It has found 22 previously-unknown bugs in four well-tested Google products.", "title": "" }, { "docid": "1997b8a0cac1b3beecfd79b3e206d7e4", "text": "Scatterplots are well established means of visualizing discrete data values with two data variables as a collection of discrete points. We aim at generalizing the concept of scatterplots to the visualization of spatially continuous input data by a continuous and dense plot. An example of a continuous input field is data defined on an n-D spatial grid with respective interpolation or reconstruction of in-between values. We propose a rigorous, accurate, and generic mathematical model of continuous scatterplots that considers an arbitrary density defined on an input field on an n-D domain and that maps this density to m-D scatterplots. Special cases are derived from this generic model and discussed in detail: scatterplots where the n-D spatial domain and the m-D data attribute domain have identical dimension, 1-D scatterplots as a way to define continuous histograms, and 2-D scatterplots of data on 3-D spatial grids. We show how continuous histograms are related to traditional discrete histograms and to the histograms of isosurface statistics. Based on the mathematical model of continuous scatterplots, respective visualization algorithms are derived, in particular for 2-D scatterplots of data from 3-D tetrahedral grids. 
For several visualization tasks, we show the applicability of continuous scatterplots. Since continuous scatterplots do not only sample data at grid points but interpolate data values within cells, a dense and complete visualization of the data set is achieved that scales well with increasing data set size. Especially for irregular grids with varying cell size, improved results are obtained when compared to conventional scatterplots. Therefore, continuous scatterplots are a suitable extension of a statistics visualization technique to be applied to typical data from scientific computation.", "title": "" }, { "docid": "d7d66f89e5f5f2d6507e0939933b3a17", "text": "The discarded clam shell waste, fossil and edible oil as biolubricant feedstocks create environmental impacts and food chain dilemma, thus this work aims to circumvent these issues by using activated saltwater clam shell waste (SCSW) as solid catalyst for conversion of Jatropha curcas oil as non-edible sources to ester biolubricant. The characterization of solid catalyst was done by Differential Thermal Analysis-Thermo Gravimetric Analysis (DTATGA), X-Ray Fluorescence (XRF), X-Ray Diffraction (XRD), Brunauer-Emmett-Teller (BET), Field Emission Scanning Electron Microscopy (FESEM) and Fourier Transformed Infrared Spectroscopy (FTIR) analysis. The calcined catalyst was used in the transesterification of Jatropha oil to methyl ester as the first step, and the second stage was involved the reaction of Jatropha methyl ester (JME) with trimethylolpropane (TMP) based on the various process parameters. The formated biolubricant was analyzed using the capillary column (DB-5HT) equipped Gas Chromatography (GC). The conversion results of Jatropha oil to ester biolubricant can be found nearly 96.66%, and the maximum distribution composition mainly contains 72.3% of triester (TE). Keywords—Conversion, ester biolubricant, Jatropha curcas oil, solid catalyst.", "title": "" }, { "docid": "72e1a2bf37495439a12a53f4b842c218", "text": "A new transmission model of human malaria in a partially immune population with three discrete delays is formulated for variable host and vector populations. These are latent period in the host population, latent period in the vector population and duration of partial immunity. The results of our mathematical analysis indicate that a threshold parameterR0 exists. ForR0 > 1, the expected number of mosquitoes infected from humansRhm should be greater than a certain critical valueR∗hm or should be less thanR∗hm whenR ∗ hm > 1, for a stable endemic equilibrium to exist. We deduce from model analysis that an increase in the period within which partial immunity is lost increases the spread of the disease. Numerically we deduce that treatment of the partially immune humans assists in reducing the severity of the disease and that transmission blocking vaccines would be effective in a partially immune population. Numerical simulations support our analytical conclusions and illustrate possible behaviour scenarios of the model. c © 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ed4050c6934a5a26fc377fea3eefa3bc", "text": "This paper presents the design of the permanent magnetic system for the wall climbing robot with permanent magnetic tracks. A proposed wall climbing robot with permanent magnetic adhesion mechanism for inspecting the oil tanks is briefly put forward, including the mechanical system architecture. 
The permanent magnetic adhesion mechanism and the tracked locomotion mechanism are employed in the robot system. By static and dynamic force analysis of the robot, design parameters about adhesion mechanism are derived. Two types of the structures of the permanent magnetic units are given in the paper. The analysis of those two types of structure is also detailed. Finally, two wall climbing robots equipped with those two different magnetic systems are discussed and the experiments are included in the paper.", "title": "" }, { "docid": "aa2e16e6ed5d2610a567e358807834d4", "text": "As the most prevailing two-factor authentication mechanism, smart-card-based password authentication has been a subject of intensive research in the past two decades, and hundreds of this type of schemes have wave upon wave been proposed. In most of these studies, there is no comprehensive and systematical metric available for schemes to be assessed objectively, and the authors present new schemes with assertions of the superior aspects over previous ones, while overlooking dimensions on which their schemes fare poorly. Unsurprisingly, most of them are far from satisfactory—either are found short of important security goals or lack of critical properties, especially being stuck with the security-usability tension. To overcome this issue, in this work we first explicitly define a security model that can accurately capture the practical capabilities of an adversary and then suggest a broad set of twelve properties framed as a systematic methodology for comparative evaluation, allowing schemes to be rated across a common spectrum. As our main contribution, a new scheme is advanced to resolve the various issues arising from user corruption and server compromise, and it is formally proved secure under the harshest adversary model so far. In particular, by integrating “honeywords”, traditionally the purview of system security, with a “fuzzy-verifier”, our scheme hits “two birds”: it not only eliminates the long-standing security-usability conflict that is considered intractable in the literature, but also achieves security guarantees beyond the conventional optimal security bound.", "title": "" }, { "docid": "11c3b4c63bb9cdc19f542bb477cca191", "text": "Although there are many motion planning techniques, there is no single one that performs optimally in every environment for every movable object. Rather, each technique has different strengths and weaknesses which makes it best-suited for particular types of situations. Also, since a given environment can consist of vastly different regions, there may not even be a single planner that is well suited for the problem. Ideally, one would use a suite of planners in concert to solve the problem by applying the best-suited planner in each region. In this paper, we propose an automated framework for feature-sensitive motion planning. We use a machine learning approach to characterize and partition C-space into (possibly overlapping) regions that are well suited to one of the planners in our library of roadmap-based motion planning methods. After the best-suited method is applied in each region, their resulting roadmaps are combined to form a roadmap of the entire planning space. We demonstrate on a range of problems that our proposed feature-sensitive approach achieves results superior to those obtainable by any of the individual planners on their own. “A Machine Learning Approach for ...”, Morales et al. 
TR04-001, Parasol Lab, Texas A&M, February 2004 1", "title": "" }, { "docid": "dc76c7e939d26a6a81a8eb891b5824b7", "text": "While deeper and wider neural networks are actively pushing the performance limits of various computer vision and machine learning tasks, they often require large sets of labeled data for effective training and suffer from extremely high computational complexity. In this paper, we will develop a new framework for training deep neural networks on datasets with limited labeled samples using cross-network knowledge projection which is able to improve the network performance while reducing the overall computational complexity significantly. Specifically, a large pre-trained teacher network is used to observe samples from the training data. A projection matrix is learned to project this teacher-level knowledge and its visual representations from an intermediate layer of the teacher network to an intermediate layer of a thinner and faster student network to guide and regulate its training process. Both the intermediate layers from the teacher network and the injection layers from the student network are adaptively selected during training by evaluating a joint loss function in an iterative manner. This knowledge projection framework allows us to use crucial knowledge learned by large networks to guide the training of thinner student networks, avoiding over-fitting, achieving better network performance, and significantly reducing the complexity. Extensive experimental results on benchmark datasets have demonstrated that our proposed knowledge projection approach outperforms existing methods, improving accuracy by up to 4% while reducing network complexity by 4 to 10 times, which is very attractive for practical applications of deep neural networks.", "title": "" }, { "docid": "a27a05cb00d350f9021b5c4f609d772c", "text": "Traffic light detection from a moving vehicle is an important technology both for new safety driver assistance functions as well as for autonomous driving in the city. In this paper we present a machine learning framework for detection of traffic lights that can handle in realtime both day and night situations in a unified manner. A semantic segmentation method is employed to generate traffic light candidates, which are then confirmed and classified by a geometric and color features based classifier. Temporal consistency is enforced by using a tracking by detection method. We evaluate our method on a publicly available dataset recorded at daytime in order to compare to existing methods and we show similar performance. We also present an evaluation on two additional datasets containing more than 50 intersections with multiple traffic lights recorded both at day and during nighttime and we show that our method performs consistently in those situations.", "title": "" }, { "docid": "b9ca1209ce50bf527d68109dbdf7431c", "text": "The MATLAB model of the analog multiplier based on the sigma delta modulation is developed. Different modes of multiplier are investigated and obtained results are compared with analytical results.", "title": "" }, { "docid": "b4abfa56d69919d264ed9ccb9a8cd2c7", "text": "Electronic commerce (e-commerce) continues to have a profound impact on the global business environment, but technologies and applications also have begun to focus more on mobile computing, the wireless Web, and mobile commerce. 
Against this backdrop, mobile banking (m-banking) has emerged as an important distribution channel, with considerable research devoted to its adoption. However, this research stream has lacked a clear roadmap or agenda. Therefore, the present article analyzes and synthesizes existing studies of m-banking adoption and maps the major theories that researchers have used to predict consumer intentions to adopt it. The findings indicate that the m-banking adoption literature is fragmented, though it commonly relies on the technology acceptance model and its modifications, revealing that compatibility (with lifestyle and device), perceived usefulness, and attitude are the most significant drivers of intentions to adopt m-banking services in developed and developing countries. Moreover, the extant literature appears limited by its narrow focus on SMS banking in developing countries; virtually no studies address the use of m-banking applications via smartphones or tablets or consider the consequences of such usage. This study makes several recommendations for continued research in the area of mobile banking.", "title": "" }, { "docid": "0d1193978e4f8be0b78c6184d7ece3fe", "text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach , network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. 
In the second approach, network classification is done by using very flexible machine learning classi-fiers that, when presented with a network as an input, classify its category or class as an output To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …", "title": "" }, { "docid": "f4e15eb37843ff4e2938b1b69ab88cb3", "text": "Static analysis tools are often used by software developers to entail early detection of potential faults, vulnerabilities, code smells, or to assess the source code adherence to coding standards and guidelines. Also, their adoption within Continuous Integration (CI) pipelines has been advocated by researchers and practitioners. This paper studies the usage of static analysis tools in 20 Java open source projects hosted on GitHub and using Travis CI as continuous integration infrastructure. Specifically, we investigate (i) which tools are being used and how they are configured for the CI, (ii) what types of issues make the build fail or raise warnings, and (iii) whether, how, and after how long are broken builds and warnings resolved. Results indicate that in the analyzed projects build breakages due to static analysis tools are mainly related to adherence to coding standards, and there is also some attention to missing licenses. Build failures related to tools identifying potential bugs or vulnerabilities occur less frequently, and in some cases such tools are activated in a \"softer\" mode, without making the build fail. Also, the study reveals that build breakages due to static analysis tools are quickly fixed by actually solving the problem, rather than by disabling the warning, and are often properly documented.", "title": "" }, { "docid": "47db0fdd482014068538a00f7dc826a9", "text": "Importance\nThe use of palliative care programs and the number of trials assessing their effectiveness have increased.\n\n\nObjective\nTo determine the association of palliative care with quality of life (QOL), symptom burden, survival, and other outcomes for people with life-limiting illness and for their caregivers.\n\n\nData Sources\nMEDLINE, EMBASE, CINAHL, and Cochrane CENTRAL to July 2016.\n\n\nStudy Selection\nRandomized clinical trials of palliative care interventions in adults with life-limiting illness.\n\n\nData Extraction and Synthesis\nTwo reviewers independently extracted data. Narrative synthesis was conducted for all trials. Quality of life, symptom burden, and survival were analyzed using random-effects meta-analysis, with estimates of QOL translated to units of the Functional Assessment of Chronic Illness Therapy-palliative care scale (FACIT-Pal) instrument (range, 0-184 [worst-best]; minimal clinically important difference [MCID], 9 points); and symptom burden translated to the Edmonton Symptom Assessment Scale (ESAS) (range, 0-90 [best-worst]; MCID, 5.7 points).\n\n\nMain Outcomes and Measures\nQuality of life, symptom burden, survival, mood, advance care planning, site of death, health care satisfaction, resource utilization, and health care expenditures.\n\n\nResults\nForty-three RCTs provided data on 12 731 patients (mean age, 67 years) and 2479 caregivers. Thirty-five trials used usual care as the control, and 14 took place in the ambulatory setting. 
In the meta-analysis, palliative care was associated with statistically and clinically significant improvements in patient QOL at the 1- to 3-month follow-up (standardized mean difference, 0.46; 95% CI, 0.08 to 0.83; FACIT-Pal mean difference, 11.36] and symptom burden at the 1- to 3-month follow-up (standardized mean difference, -0.66; 95% CI, -1.25 to -0.07; ESAS mean difference, -10.30). When analyses were limited to trials at low risk of bias (n = 5), the association between palliative care and QOL was attenuated but remained statistically significant (standardized mean difference, 0.20; 95% CI, 0.06 to 0.34; FACIT-Pal mean difference, 4.94), whereas the association with symptom burden was not statistically significant (standardized mean difference, -0.21; 95% CI, -0.42 to 0.00; ESAS mean difference, -3.28). There was no association between palliative care and survival (hazard ratio, 0.90; 95% CI, 0.69 to 1.17). Palliative care was associated consistently with improvements in advance care planning, patient and caregiver satisfaction, and lower health care utilization. Evidence of associations with other outcomes was mixed.\n\n\nConclusions and Relevance\nIn this meta-analysis, palliative care interventions were associated with improvements in patient QOL and symptom burden. Findings for caregiver outcomes were inconsistent. However, many associations were no longer significant when limited to trials at low risk of bias, and there was no significant association between palliative care and survival.", "title": "" }, { "docid": "cb4adbfa09f4ad217fe1efa9541ab5ab", "text": "This paper presents an efficient implementation of the Wavenet generation process called Fast Wavenet. Compared to a naı̈ve implementation that has complexity O(2) (L denotes the number of layers in the network), our proposed approach removes redundant convolution operations by caching previous calculations, thereby reducing the complexity to O(L) time. Timing experiments show significant advantages of our fast implementation over a naı̈ve one. While this method is presented for Wavenet, the same scheme can be applied anytime one wants to perform autoregressive generation or online prediction using a model with dilated convolution layers. The code for our method is publicly available.", "title": "" }, { "docid": "38bb20a4be56f408a621b1e9e8e4bf6d", "text": "In the last five years there have been a large number of new time series classification algorithms proposed in the literature. These algorithms have been evaluated on subsets of the 47 data sets in the University of California, Riverside time series classification archive. The archive has recently been expanded to 85 data sets, over half of which have been donated by researchers at the University of East Anglia. Aspects of previous evaluations have made comparisons between algorithms difficult. For example, several different programming languages have been used, experiments involved a single train/test split and some used normalised data whilst others did not. The relaunch of the archive provides a timely opportunity to thoroughly evaluate algorithms on a larger number of datasets. We have implemented 20 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets. We use these results to test several hypotheses relating to whether the algorithms are significantly more accurate than the benchmarks and each other. 
Our results indicate that only 9 of these algorithms are significantly more accurate than both benchmarks and that one classifier, the Collective of Transformation Ensembles, is significantly more accurate than all of the others. All of our experiments and results are reproducible: we release all of our code, results and experimental details and we hope these experiments form the basis for more rigorous testing of new algorithms in the future.", "title": "" } ]
scidocsrr
4b45b394051d30f8bc26f0e91f1c7148
Example-Tracing Tutors: Intelligent Tutor Development for Non-programmers
[ { "docid": "53e3b3c3bcc4fc3c4ddb2e3defcc78a2", "text": "The Cognitive Tutor Authoring Tools (CTAT) support creation of a novel type of tutors called example-tracing tutors. Unlike other types of ITSs (e.g., model-tracing tutors, constraint-based tutors), exampletracing tutors evaluate student behavior by flexibly comparing it against generalized examples of problemsolving behavior. Example-tracing tutors are capable of sophisticated tutoring behaviors; they provide step-bystep guidance on complex problems while recognizing multiple student strategies and (where needed) maintaining multiple interpretations of student behavior. They therefore go well beyond VanLehn’s (2006) minimum criterion for ITS status, namely, that the system has an inner loop (i.e., provides within-problem guidance, not just end-of-problem feedback). Using CTAT, example-tracing tutors can be created without programming. An author creates a tutor interface through drag-and-drop techniques, and then demonstrates the problem-solving behaviors to be tutored. These behaviors are recorded in a “behavior graph,” which can be easily edited and generalized. Compared to other approaches to programming by demonstration for ITS development, CTAT implements a simpler method (no machine learning is used) that is currently more pragmatic and proven for widespread, real-world use by non-programmers. Development time estimates from a large number of real-world ITS projects that have used CTAT suggest that example-tracing tutors reduce development cost by a factor of 4 to 8, compared to “historical” estimates of ITS development time and cost. The main contributions of the work are a novel ITS technology, based on the use of generalized behavioral examples to guide students in problem-solving exercises, as well as a suite of mature and robust tools for efficiently building real-world ITSs without programming.", "title": "" } ]
[ { "docid": "1dee4c916308295626bce658529a8e0e", "text": "Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs. One core idea of adversarial example research is to reveal neural network errors under such distribution shifts. We decompose these errors into two complementary sources: sensitivity and invariance. We show deep networks are not only too sensitive to task-irrelevant changes of their input, as is well-known from -adversarial examples, but are also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks. We show such excessive invariance occurs across various tasks and architecture types. On MNIST and ImageNet one can manipulate the class-specific content of almost any image without changing the hidden activations. We identify an insufficiency of the standard cross-entropy loss as a reason for these failures. Further, we extend this objective based on an informationtheoretic analysis so it encourages the model to consider all task-dependent features in its decision. This provides the first approach tailored explicitly to overcome excessive invariance and resulting vulnerabilities.", "title": "" }, { "docid": "7436bf163d0dcf6d2fbe8ccf66431caf", "text": "Zh h{soruh ehkdylrudo h{sodqdwlrqv iru vxe0rswlpdo frusrudwh lqyhvwphqw ghflvlrqv1 Irfxvlqj rq wkh vhqvlwlylw| ri lqyhvwphqw wr fdvk rz/ zh dujxh wkdw shuvrqdo fkdudfwhulvwlfv ri fklhi h{hfxwlyh r fhuv/ lq sduwlfxodu ryhufrq ghqfh/ fdq dffrxqw iru wklv zlghvsuhdg dqg shuvlvwhqw lqyhvwphqw glvwruwlrq1 Ryhufrq ghqw FHRv ryhuhvwlpdwh wkh txdolw| ri wkhlu lqyhvwphqw surmhfwv dqg ylhz h{whuqdo qdqfh dv xqgxo| frvwo|1 Dv d uhvxow/ wkh| lqyhvw pruh zkhq wkh| kdyh lqwhuqdo ixqgv dw wkhlu glvsrvdo1 Zh whvw wkh ryhufrq ghqfh k|srwkhvlv/ xvlqj gdwd rq shuvrqdo sruwirolr dqg frusrudwh lqyhvwphqw ghflvlrqv ri FHRv lq Iruehv 833 frpsdqlhv1 Zh fodvvli| FHRv dv ryhufrq ghqw li wkh| uhshdwhgo| idlo wr h{huflvh rswlrqv wkdw duh kljko| lq wkh prqh|/ ru li wkh| kdelwxdoo| dftxluh vwrfn ri wkhlu rzq frpsdq|1 Wkh pdlq uhvxow lv wkdw lqyhvwphqw lv vljql fdqwo| pruh uhvsrqvlyh wr fdvk rz li wkh FHR glvsod|v ryhufrq ghqfh1 Lq dgglwlrq/ zh lghqwli| shuvrqdo fkdudfwhulvwlfv rwkhu wkdq ryhufrq ghqfh +hgxfdwlrq/ hpsor|phqw edfnjurxqg/ frkruw/ plolwdu| vhuylfh/ dqg vwdwxv lq wkh frpsdq|, wkdw vwurqjo| d hfw wkh fruuhodwlrq ehwzhhq lqyhvwphqw dqg fdvk rz1", "title": "" }, { "docid": "e9a6e15a3d6b3da0b213c3627ccc695e", "text": "Appendix A. Detail of geometry constraint loss 0.0.1 Forward We consider four groups of bones, defined by G = {arm, leg, shoulder, hip}. Rarm = {left lower arm, left upper arm, right lower arm, right upper arm}, Rleg = {left lower leg, left upper leg, right lower leg, right upper leg}, Rshoulder = { left shoulder bone, left shoulder bone}, Rhip = {left hip bone, left hip bone}. A bone (e.g. left lower arm) is represented by the index of its two end-points, e = (jL, jR), i.e., left lower arm = (left wrist, left elbow). Let Y (j) 2D = (u , v), we have le = ||Y (jL) 3D , Y (jR) 3D ||", "title": "" }, { "docid": "b9404d66fa6cc759382c73d6ae16fc0c", "text": "Aspect extraction is an important and challenging task in aspect-based sentiment analysis. Existing works tend to apply variants of topic models on this task. While fairly successful, these methods usually do not produce highly coherent aspects. In this paper, we present a novel neural approach with the aim of discovering coherent aspects. 
The model improves coherence by exploiting the distribution of word co-occurrences through the use of neural word embeddings. Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space. In addition, we use an attention mechanism to de-emphasize irrelevant words during training, further improving the coherence of aspects. Experimental results on real-life datasets demonstrate that our approach discovers more meaningful and coherent aspects, and substantially outperforms baseline methods on several evaluation tasks.", "title": "" }, { "docid": "5551c139bf9bdb144fabce6a20fda331", "text": "A common prerequisite for a number of debugging and performance-analysis techniques is the injection of auxiliary program code into the application under investigation, a process called instrumentation. To accomplish this task, source-code preprocessors are often used. Unfortunately, existing preprocessing tools either focus only on a very specific aspect or use hard-coded commands for instrumentation. In this paper, we examine which basic constructs are required to specify a user-defined routine entry/exit instrumentation. This analysis serves as a basis for a generic instrumentation component working on the source-code level where the instructions to be inserted can be flexibly configured. We evaluate the identified constructs with our prototypical implementation and show that these are sufficient to fulfill the needs of a number of today’s performance-analysis tools.", "title": "" }, { "docid": "52ca97ccba94fe3d96ba0ef39551625d", "text": "The MACCEPA (Mechanically Adjustable Compliance and Controllable Equilibrium Position Actuator) is an electric actuator of which the compliance and equilibrium position are fully independently controllable and both are set by a dedicated servomotor. In this paper an improvement of the actuator is proposed where the torque-angle curve and consequently the stiffness-angle curve can be modified by choosing an appropriate shape of a profile disk, which replaces the lever arm of the former design. The actuator has a large joint angle, torque and stiffness range and these properties can be made beneficial for safe human robot interaction and the construction of energy efficient walking, hopping and running robots. The ability to store and release energy is shown by simulations on a 1DOF hopping robot. Its hopping height is much higher compared to a configuration in which the same motor is used in a traditional stiff setup. The stiffness of the actuator has a stiffening characteristic so the leg stiffness resembles more a linear stiffness as found in humans.", "title": "" }, { "docid": "79f1473d4eb0c456660543fda3a648f1", "text": "We examine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have been shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction.
We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.", "title": "" }, { "docid": "0737e99613b83104bc9390a46fbc4aeb", "text": "Natural language text exhibits hierarchical structure in a variety of respects. Ideally, we could incorporate our prior knowledge of this hierarchical structure into unsupervised learning algorithms that work on text data. Recent work by Nickel and Kiela (2017) proposed using hyperbolic instead of Euclidean embedding spaces to represent hierarchical data and demonstrated encouraging results when embedding graphs. In this work, we extend their method with a re-parameterization technique that allows us to learn hyperbolic embeddings of arbitrarily parameterized objects. We apply this framework to learn word and sentence embeddings in hyperbolic space in an unsupervised manner from text corpora. The resulting embeddings seem to encode certain intuitive notions of hierarchy, such as wordcontext frequency and phrase constituency. However, the implicit continuous hierarchy in the learned hyperbolic space makes interrogating the model’s learned hierarchies more difficult than for models that learn explicit edges between items. The learned hyperbolic embeddings show improvements over Euclidean embeddings in some – but not all – downstream tasks, suggesting that hierarchical organization is more useful for some tasks than others.", "title": "" }, { "docid": "8c853251e0fb408c829e6f99a581d4cf", "text": "We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.", "title": "" }, { "docid": "869e01855c8cfb9dc3e64f7f3e73cd60", "text": "Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. 
An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.", "title": "" }, { "docid": "dc2752eeee3e4ccf0bf6d912ef72b5a8", "text": "It takes more than water to restore a wetland. Now, scientists are documenting how landscape setting, habitat type, hydrological regime, soil properties, topography, nutrient supplies, disturbance regimes, invasive species, seed banks and declining biodiversity can constrain the restoration process. Although many outcomes can be explained post hoc, we have little ability to predict the path that sites will follow when restored in alternative ways, and no insurance that specific targets will be met. To become predictive, bolder approaches are now being developed, which rely more on field experimentation at multiple spatial and temporal scales, and in many restoration contexts.", "title": "" }, { "docid": "e22a3cd1887d905fffad0f9d14132ed6", "text": "Relativistic electron beam generation studies have been carried out in LIA-400 system through explosive electron emission for various cathode materials. This paper presents the emission properties of different cathode materials at peak diode voltages varying from 10 to 220 kV and at peak current levels from 0.5 to 2.2 kA in a single pulse duration of 160-180 ns. The cathode materials used are graphite, stainless steel, and red polymer velvet. The perveance data calculated from experimental waveforms are compared with 1-D Child Langmuir formula to obtain the cathode plasma expansion velocity for various cathode materials. Various diode parameters are subject to shot to shot variation analysis. Velvet cathode proves to be the best electron emitter because of its lower plasma expansion velocity and least shot to shot variability.", "title": "" }, { "docid": "83dd0cd815c79932e6ff8b1faf780ef2", "text": "Pattern recognition and registration are integral elements of computer vision, which considers image patterns. This thesis presents novel blur, and combined blur and geometric invariant features for pattern recognition and registration related to images. These global or local features are based on the Fourier transform phase, and are invariant or insensitive to image blurring with a centrally symmetric point spread function which can result, for example, from linear motion or out of focus. The global features are based on the even powers of the phase-only discrete Fourier spectrum or bispectrum of an image and are invariant to centrally symmetric blur. These global features are used for object recognition and image registration. The features are extended for geometrical invariances up to similarity transformation: shift invariance is obtained using bispectrum, and rotation-scale invariance using log-polar mapping of bispectrum slices. Affine invariance can be achieved as well using rotated sets of the log-log mapped bispectrum slices. The novel invariants are shown to be more robust to additive noise than the earlier blur, and combined blur and geometric invariants based on image moments. The local features are computed using the short term Fourier transform in local windows around the points of interest. 
Only the lowest horizontal, vertical, and diagonal frequency coefficients are used, the phase of which is insensitive to centrally symmetric blur. The phases of these four frequency coefficients are quantized and used to form a descriptor code for the local region. When these local descriptors are used for texture classification, they are computed for every pixel, and added up to a histogram which describes the local pattern. There are no earlier texture features which have been claimed to be invariant to blur. The proposed descriptors were superior in the classification of blurred textures compared to a few non-blur invariant state of the art texture classification methods.", "title": "" }, { "docid": "b425716ec96c3fadb73d6475d1278c06", "text": "This paper presents a transformerless single-phase inverter topology based on a modified H-bridge-based multilevel converter. The topology comprises two legs, namely, a usual two-level leg and a T-type leg. The latter is based on a usual two-level leg, which has been modified to gain access to the midpoint of the split dc-link by means of a bidirectional switch. The topology is referred to as an asymmetrical T-type five-level (5L-T-AHB) inverter. An ad hoc modulation strategy based on sinusoidal pulsewidth modulation is also presented to control the 5L-T-AHB inverter, where the two-level leg is commuted at fundamental frequency. Numerical and experimental results show that the proposed 5L-T-AHB inverter achieves high efficiency, exhibits reduced leakage currents, and complies with the transformerless norms and regulations, which makes it suitable for the transformerless PV inverters market. This updated version includes experimental evidence, considerations for practical implementation, efficiency studies, visualization of semiconductor losses distribution, a deeper and corrected common mode analysis, and an improved notation among other modifications.", "title": "" }, { "docid": "3100f5d0ed870be38770caf729798624", "text": "Our research objective is to facilitate the identification of true input manipulation vulnerabilities via the combination of static analysis, runtime detection, and automatic testing. We propose an approach for SQL injection vulnerability detection, automated by a prototype tool SQLInjectionGen. We performed case studies on two small web applications for the evaluation of our approach compared to static analysis for identifying true SQL injection vulnerabilities. In our case study, SQLInjectionGen had no false positives, but had a small number of false negatives while the static analysis tool had a false positive for every vulnerability that was actually protected by a white or black list.", "title": "" }, { "docid": "714641a148e9a5f02bb13d5485203d70", "text": "The aim of this paper is to present a review of recently used current control techniques for three-phase voltage-source pulsewidth modulated converters. Various techniques, different in concept, have been described in two main groups: linear and nonlinear. The first includes proportional integral (stationary and synchronous) and state feedback controllers, and predictive techniques with constant switching frequency. The second comprises bang-bang (hysteresis, delta modulation) controllers and predictive controllers with on-line optimization. New trends in the current control—neural networks and fuzzy-logic-based controllers—are discussed, as well.
Selected oscillograms accompany the presentation in order to illustrate properties of the described controller groups.", "title": "" }, { "docid": "fa3cbd4b20e69cc73f5293f1e0da6a98", "text": "The first € price and the £ and $ price are net prices, subject to local VAT. Prices indicated with * include VAT for books; the €(D) includes 7% for Germany, the €(A) includes 10% for Austria. Prices indicated with ** include VAT for electronic products; 19% for Germany, 20% for Austria. All prices exclusive of carriage charges. Prices and other details are subject to change without notice. All errors and omissions excepted. Z. Michalewicz, D.B. Fogel How to Solve It: Modern Heuristics", "title": "" }, { "docid": "af49fef0867a951366cfb21288eeb3ed", "text": "As a discriminative method of one-shot learning, Siamese deep network allows recognizing an object from a single exemplar with the same class label. However, it does not take the advantage of the underlying structure and relationship among a multitude of instances since it only relies on pairs of instances for training. In this paper, we propose a quadruplet deep network to examine the potential connections among the training instances, aiming to achieve a more powerful representation. We design four shared networks that receive multi-tuple of instances as inputs and are connected by a novel loss function consisting of pair-loss and tripletloss. According to the similarity metric, we select the most similar and the most dissimilar instances as the positive and negative inputs of triplet loss from each multi-tuple. We show that this scheme improves the training performance and convergence speed. Furthermore, we introduce a new weighted pair loss for an additional acceleration of the convergence. We demonstrate promising results for model-free tracking-by-detection of objects from a single initial exemplar in the Visual Object Tracking benchmark.", "title": "" }, { "docid": "27d9675f4296f455ade2c58b7f7567e8", "text": "In recent years, sharing economy has been growing rapidly. Meanwhile, understanding why people participate in sharing economy emerges as a rising concern. Given that research on sharing economy is scarce in the information systems literature, this paper aims to enrich the theoretical development in this area by testing different dimensions of convenience and risk that may influence people’s participation intention in sharing economy. We will also examine the moderate effects of two regulatory foci (i.e., promotion focus and prevention focus) on participation intention. The model will be tested with data of Uber users. Results of the study will help researchers and practitioners better understand people’s behavior in sharing economy.", "title": "" }, { "docid": "1dd4bed5dd52b18f39c0e96c0a14c153", "text": "Understanding the generalization of deep learning has raised lots of concerns recently, where the learning algorithms play an important role in generalization performance, such as stochastic gradient descent (SGD). Along this line, we particularly study the anisotropic noise introduced by SGD, and investigate its importance for the generalization in deep neural networks. Through a thorough empirical analysis, it is shown that the anisotropic diffusion of SGD tends to follow the curvature information of the loss landscape, and thus is beneficial for escaping from sharp and poor minima effectively, towards more stable and flat minima. 
We verify our understanding by comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of position-dependent noise.", "title": "" } ]
scidocsrr
72c80e5ad6aaf500249500cae90658b4
The Web of False Information: Rumors, Fake News, Hoaxes, Clickbait, and Various Other Shenanigans
[ { "docid": "f2c6a7f205f1aa6b550418cd7e93f7d2", "text": "This paper addresses the problem of a single rumor source detection with multiple observations, from a statistical point of view of a spreading over a network, based on the susceptible-infectious model. For tree networks, multiple sequential observations for one single instance of rumor spreading cannot improve over the initial snapshot observation. The situation dramatically improves for multiple independent observations. We propose a unified inference framework based on the union rumor centrality, and provide explicit detection performance for degree-regular tree networks. Surprisingly, even with merely two observations, the detection probability at least doubles that of a single observation, and further approaches one, i.e., reliable detection, with increasing degree. This indicates that a richer diversity enhances detectability. For general graphs, a detection algorithm using a breadth-first search strategy is also proposed and evaluated. Besides rumor source detection, our results can be used in network forensics to combat recurring epidemic-like information spreading such as online anomaly and fraudulent email spams.", "title": "" }, { "docid": "5505f3e227ebba96e34e022bc59fe57a", "text": "Social media has quickly risen to prominence as a news source, yet lingering doubts remain about its ability to spread rumor and misinformation. Systematically studying this phenomenon, however, has been difficult due to the need to collect large-scale, unbiased data along with in-situ judgements of its accuracy. In this paper we present CREDBANK, a corpus designed to bridge this gap by systematically combining machine and human computation. Specifically, CREDBANK is a corpus of tweets, topics, events and associated human credibility judgements. It is based on the real-time tracking of more than 1 billion streaming tweets over a period of more than three months, computational summarizations of those tweets, and intelligent routings of the tweet streams to human annotators—within a few hours of those events unfolding on Twitter. In total CREDBANK comprises more than 60 million tweets grouped into 1049 real-world events, each annotated by 30 human annotators. As an example, with CREDBANK one can quickly calculate that roughly 24% of the events in the global tweet stream are not perceived as credible. We have made CREDBANK publicly available, and hope it will enable new research questions related to online information credibility in fields such as social science, data mining and health.", "title": "" }, { "docid": "df08803274492f2eb2fe92e69bc3b9e6", "text": "Wikipedia is a major source of information for many people. However, false information on Wikipedia raises concerns about its credibility. One way in which false information may be presented on Wikipedia is in the form of hoax articles, i.e., articles containing fabricated facts about nonexistent entities or events. In this paper we study false information on Wikipedia by focusing on the hoax articles that have been created throughout its history. We make several contributions. First, we assess the real-world impact of hoax articles by measuring how long they survive before being debunked, how many pageviews they receive, and how heavily they are referred to by documents on the Web. We find that, while most hoaxes are detected quickly and have little impact on Wikipedia, a small number of hoaxes survive long and are well cited across the Web. 
Second, we characterize the nature of successful hoaxes by comparing them to legitimate articles and to failed hoaxes that were discovered shortly after being created. We find characteristic differences in terms of article structure and content, embeddedness into the rest of Wikipedia, and features of the editor who created the hoax. Third, we successfully apply our findings to address a series of classification tasks, most notably to determine whether a given article is a hoax. And finally, we describe and evaluate a task involving humans distinguishing hoaxes from non-hoaxes. We find that humans are not good at solving this task and that our automated classifier outperforms them by a big margin.", "title": "" } ]
[ { "docid": "35cbd0797156630e7b3edf7cf76868c1", "text": "Given a bipartite graph of users and the products that they review, or followers and followees, how can we detect fake reviews or follows? Existing fraud detection methods (spectral, etc.) try to identify dense subgraphs of nodes that are sparsely connected to the remaining graph. Fraudsters can evade these methods using camouflage, by adding reviews or follows with honest targets so that they look “normal.” Even worse, some fraudsters use hijacked accounts from honest users, and then the camouflage is indeed organic.\n Our focus is to spot fraudsters in the presence of camouflage or hijacked accounts. We propose FRAUDAR, an algorithm that (a) is camouflage resistant, (b) provides upper bounds on the effectiveness of fraudsters, and (c) is effective in real-world data. Experimental results under various attacks show that FRAUDAR outperforms the top competitor in accuracy of detecting both camouflaged and non-camouflaged fraud. Additionally, in real-world experiments with a Twitter follower--followee graph of 1.47 billion edges, FRAUDAR successfully detected a subgraph of more than 4, 000 detected accounts, of which a majority had tweets showing that they used follower-buying services.", "title": "" }, { "docid": "74fa56730057ae21f438df46054041c4", "text": "Facial fractures can lead to long-term sequelae if not repaired. Complications from surgical approaches can be equally detrimental to the patient. Periorbital approaches via the lower lid can lead to ectropion, entropion, scleral show, canthal malposition, and lid edema.1–6 Ectropion can cause epiphora, whereas entropion often causes pain and irritation due to contact between the cilia and cornea. Transcutaneous and tranconjunctival approaches are commonly used to address fractures of the infraorbital rim and orbital floor. The transconjunctival approach is popular among otolaryngologists and ophthalmologists, whereas transcutaneous approaches are more commonly used by oral maxillofacial surgeons and plastic surgeons.7Ridgwayet al reported in theirmeta-analysis that lid complications are highest with the subciliary approach (19.1%) and lowest with transconjunctival approach (2.1%).5 Raschke et al also found a lower incidence of lower lid malpositionvia the transconjunctival approach comparedwith the subciliary approach.8 Regardless of approach, complications occur and thefacial traumasurgeonmustknowhowtomanage these issues. In this article, we will review the common complications of lower lid surgery and their treatment.", "title": "" }, { "docid": "9dff83d2915fc4a3149ab921d62f8182", "text": "This study proposes gaze-based hand interaction, which is helpful for improving the user’s immersion in the production process of virtual reality content for the mobile platform, and analyzes efficiency through an experiment using a questionnaire. First, three-dimensional interactive content is produced for use in the proposed interaction experiment while presenting an experiential environment that gives users a high sense of immersion in the mobile virtual reality environment. This is designed to induce the tension and concentration of users in line with the immersive virtual reality environment. Additionally, a hand interaction method based on gaze—which is mainly used for the entry of mobile virtual reality content—is proposed as a design method for immersive mobile virtual reality environment. 
The user satisfaction level of the immersive environment provided by the proposed gaze-based hand interaction is analyzed through experiments in comparison with the general method that uses gaze only. Furthermore, detailed analysis is conducted by dividing the effects of the proposed interaction method on user’s psychology into positive factors such as immersion and interest and negative factors such as virtual reality (VR) sickness and dizziness. In this process, a new direction is proposed for improving the immersion of users in the production of mobile platform virtual reality content.", "title": "" }, { "docid": "82b628f4ce9e3d4a7ef8db114340e191", "text": "Cervical cancer (CC) is a leading cause of death in women worldwide. Radiation therapy (RT) for CC is an effective alternative, but its toxicity remains challenging. Blueberry is amongst the most commonly consumed berries in the United States. We previously showed that resveratrol, a compound in red grapes, can be used as a radiosensitizer for prostate cancer. In this study, we found that the percentage of colonies, PCNA expression level and the OD value of cells from the CC cell line SiHa were all decreased in RT/Blueberry Extract (BE) group when compared to those in the RT alone group. Furthermore, TUNEL+ cells and the relative caspase-3 activity in the CC cells were increased in the RT/BE group compared to those in the RT alone group. The anti-proliferative effect of RT/BE on cancer cells correlated with downregulation of pro-proliferative molecules cyclin D and cyclin E. The pro-apoptotic effect of RT/BE correlated with upregulation of the pro-apoptotic molecule TRAIL. Thus, BE sensitizes SiHa cells to RT by inhibition of proliferation and promotion of apoptosis, suggesting that blueberry might be used as a potential radiosensitizer to treat CC.", "title": "" }, { "docid": "7e70955671d2ad8728fdba0fc3ec5548", "text": "Detection of drowsiness based on extraction of IMF’s from EEG signal using EMD process and characterizing the features using trained Artificial Neural Network (ANN) is introduced in this paper. Our subjects are 8 volunteers who have not slept for last 24 hour due to travelling. EEG signal was recorded when the subject is sitting on a chair facing video camera and are obliged to see camera only. ANN is trained using a utility made in Matlab to mark the EEG data for drowsy state and awaked state and then extract IMF’s of marked data using EMD to prepare feature inputs for Neural Network. Once the neural network is trained, IMFs of New subjects EEG Signals is given as input and ANN will give output in two different states i.e. ‘drowsy’ or ‘awake’. The system is tested on 8 different subjects and it provided good results with more than 84.8% of correct detection of drowsy states.", "title": "" }, { "docid": "ab07e92f052a03aac253fabadaea4ab3", "text": "As news is increasingly accessed on smartphones and tablets, the need for personalising news app interactions is apparent. We report a series of three studies addressing key issues in the development of adaptive news app interfaces. We first surveyed users' news reading preferences and behaviours; analysis revealed three primary types of reader. We then implemented and deployed an Android news app that logs users' interactions with the app. We used the logs to train a classifier and showed that it is able to reliably recognise a user according to their reader type. Finally we evaluated alternative, adaptive user interfaces for each reader type. 
The evaluation demonstrates the differential benefit of the adaptation for different users of the news app and the feasibility of adaptive interfaces for news apps.", "title": "" }, { "docid": "9d3cb5ae51c25bb059a7503d1212e6a5", "text": "People are generally unaware of the operation of the system of cognitive mechanisms that ameliorate their experience of negative affect (the psychological immune system), and thus they tend to overestimate the duration of their affective reactions to negative events. This tendency was demonstrated in 6 studies in which participants overestimated the duration of their affective reactions to the dissolution of a romantic relationship, the failure to achieve tenure, an electoral defeat, negative personality feedback, an account of a child's death, and rejection by a prospective employer. Participants failed to distinguish between situations in which their psychological immune systems would and would not be likely to operate and mistakenly predicted overly and equally enduring affective reactions in both instances. The present experiments suggest that people neglect the psychological immune system when making affective forecasts.", "title": "" }, { "docid": "b5360df245a0056de81c89945f581f14", "text": "The inability to cope successfully with the enormous stress of medical education may lead to a cascade of consequences at both a personal and professional level. The present study examined the short-term effects of an 8-week meditation-based stress reduction intervention on premedical and medical students using a well-controlled statistical design. Findings indicate that participation in the intervention can effectively (1) reduce self-reported state and trait anxiety, (2) reduce reports of overall psychological distress including depression, (3) increase scores on overall empathy levels, and (4) increase scores on a measure of spiritual experiences assessed at termination of intervention. These results (5) replicated in the wait-list control group, (6) held across different experiments, and (7) were observed during the exam period. Future research should address potential long-term effects of mindfulness training for medical and premedical students.", "title": "" }, { "docid": "64d72ffe736831266acde9726d6d039f", "text": "Recently, image caption which aims to generate a textual description for an image automatically has attracted researchers from various fields. Encouraging performance has been achieved by applying deep neural networks. Most of these works aim at generating a single caption which may be incomprehensive, especially for complex images. This paper proposes a topic-specific multi-caption generator, which infer topics from image first and then generate a variety of topic-specific captions, each of which depicts the image from a particular topic. We perform experiments on flickr8k, flickr30k and MSCOCO. The results show that the proposed model performs better than single-caption generator when generating topic-specific captions. The proposed model effectively generates diversity of captions under reasonable topics and they differ from each other in topic level.", "title": "" }, { "docid": "151fd47f87944978edfafb121b655ad8", "text": "We introduce a pair of tools, Rasa NLU and Rasa Core, which are open source python libraries for building conversational software. Their purpose is to make machine-learning based dialogue management and language understanding accessible to non-specialist software developers. 
In terms of design philosophy, we aim for ease of use, and bootstrapping from minimal (or no) initial training data. Both packages are extensively documented and ship with a comprehensive suite of tests. The code is available at https://github.com/RasaHQ/", "title": "" }, { "docid": "a1a800cf63f997501e1a35c0da0e075b", "text": "In this paper, an improved design of an ironless axial flux permanent magnet synchronous generator (AFPMSG) is presented for direct-coupled wind turbine application considering wind speed characteristics. The partial swarm optimization method is used to perform a multi-objective design optimization of the ironless AFPMSG in order to decrease the active material cost and increase the annual energy yield of the generator over the entire range of operating wind speed. General practical and mechanical limitations in the design of the generator are considered as optimization constraints. For accurate analytical design of the generator, distribution of the flux in all parts of the machine is obtained through a modified magnetic equivalent circuit model of AFPMSG. In this model, the magnetic saturation of the rotor back iron cores is considered using a nonlinear iterative algorithm. Various combinations of pole and coil numbers are studied in the design of a 30 kW AFPMSG via the optimization procedure. Finally, 3-D finite-element model of the generator was prepared to confirm the validity of the proposed design procedure and the generator performance for various wind speeds.", "title": "" }, { "docid": "6310989ad025f88412dc5d4ba7ad01af", "text": "The mobile network plays an important role in the evolution of humanity and society. However, due to the increase of users as well as of mobile applications, the current mobile network architecture faces many challenges. In this paper we describe V-Core, a new architecture for the mobile packet core network which is based on Software Defined Networking and Network Function Virtualization. Then, we introduce a MobileVisor which is a machine to slice the above mobile packet core network into different control platforms according to either different mobile operators or different technologies (e.g. 3G or 4G). With our architecture, the mobile network operators can reduce their costs for deployment and operation as well as use network resources efficiently.", "title": "" }, { "docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94", "text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. 
In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.", "title": "" }, { "docid": "7150582a101e66442d1d44adf812fd8e", "text": "Introduction: Exfoliative Cheilitis (EC) is a rare chronic inflammatory condition affecting the vermillion border of the lips and characterized by excessive production of keratin. In this paper, a case of chronic EC that was relatively resistant to usual treatment is to be introduced. Also, we explain its clinical course and treatment procedures and measures. Case Presentation: We report a 25-year-old Iranian woman presented with desquamation of her lips. History, clinical feature and histopathological examination revealed a diagnosis of EC. Conclusions: Although treatment of EC in the present study had considerable success in the short-term follow-up, it should be noted, due to the unknown of the exact cause, no specific treatment or protocol has still been identified.", "title": "" }, { "docid": "8a4436b621021ca7553a408749e722fb", "text": "The main contribution of the paper is to develop a wearable arm band for safety and protection of women and girls. This objective is achieved by the analysis of physiological signal in conjunction with body position. The physiological signals that are analyzed are pulse rate sensor, vibration sensor and if there is any fault it additionally uses a fault detection sensor. Acquisition of raw data makes the Arduino controller function by activating the GPS to send alert messages via GSM and the wireless camera captures images and videos and sends images to the pre-decided contacts and also shares video calling to the family contact. The alarm is employed to alert the surroundings by its sound and meanwhile, she can also use a TAZER as a self-defense mechanism.", "title": "" }, { "docid": "5bc22b48b82b749f81c8ac95ababba83", "text": "Matrix factorization techniques have been frequently applied in many fields. Among them, nonnegative matrix factorization (NMF) has received considerable attention for it aims to find a parts-based, linear representations of nonnegative data. Recently, many researchers propose various manifold learning algorithms to enhance learning performance by considering the local manifold smoothness assumption. However, NMF does not consider the geometrical structure of data and the local manifold smoothness does not directly ensure the representations of the data point with different labels being dissimilar. In order to find a better representation of data, we propose a novel matrix decomposition method, called nonnegative matrix factorization with Regularizations (RNMF), which incorporates three appropriate regularizations: nonnegative matrix factorization, the local manifold smoothness and a rank constraint. The representations of data learned by RNMF tend to be discriminative and sparse. By learning a Mahalanobis distance space based on labeled data, RNMF can also be extended to a semi-supervised algorithm (semi-RNMF) which has an amazing improvement on clustering performance. 
Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.", "title": "" }, { "docid": "c5d8eb0360af180023d61c665b010db8", "text": "Going beyond its role as an encyclopedia, Wikipedia becomes a global memory place for high-impact events, such as, natural disasters and manmade incidents, thus influencing collective memory, i.e., the way we remember the past. Due to the importance of collective memory for framing the assessment of new situations, our actions and value systems, its open construction and negotiation in Wikipedia is an important new cultural and societal phenomenon. The analysis of this phenomenon does not only promise new insights in collective memory. It is also an important foundation for technology, which more effectively complements the processes of human forgetting and remembering and better enables us to learn from the past. In this paper, we analyse the long-term dynamics of Wikipedia as a global memory place for high-impact events. This complements existing work in analysing the collective memory negotiation and construction process in Wikipedia directly following the event. In more detail, we are interested in catalysts for reviving memories, i.e., in the fuel that keeps memories of past events alive, interrupting the general trend for fast forgetting. For this purpose, we study the trigger of revisiting behavior for a large set of event pages by exploiting page views and time series analysis, as well as identify of most important catalyst features.", "title": "" }, { "docid": "ef9df30505ee9c593af81284293e58f9", "text": "The coding by which chromosomes represent candidate solutions is a fundamental design choice in a genetic algorithm. This paper describes a novel coding of spanning trees in a genetic algorithm for the degree-constrained minimum spanning tree problem. For a connected, weighted graph, this problem seeks to identify the shortest spanning tree whose degree does not exceed an upper bound k > 2. In the coding, chromosomes are strings of numerical weights associated with the target graph's vertices. The weights temporarily bias the graph's edge costs, and an extension of Prim's algorithm, applied to the biased costs, identifies the feasible spanning tree a chromosome represents. This decoding algorithm enforces the degree constraint, so that all chromosomes represent valid solutions and there is no need to discard, repair, or penalize invalid chromosomes. On a set of hard graphs whose unconstrained minimum spanning trees are of high degree, a genetic algorithm that uses this coding identifies degree-constrained minimum spanning trees that are on average shorter than those found by several competing algorithms.", "title": "" }, { "docid": "9d803b0ce1f1af621466b1d7f97b7edf", "text": "This research paper addresses the methodology and approaches to managing criminal computer forensic investigations in a law enforcement environment with management controls, operational controls, and technical controls. Management controls cover policy and standard operating procedures (SOP's), methodology, and guidance. Operational controls cover SOP requirements, seizing evidence, evidence handling, best practices, and education, training and awareness. Technical controls cover acquisition and analysis procedures, data integrity, rules of evidence, presenting findings, proficiency testing, and data archiving.", "title": "" } ]
scidocsrr
ca81f24123c493877baad0b45ac5fb1a
PPF-FoldNet: Unsupervised Learning of Rotation Invariant 3D Local Descriptors
[ { "docid": "ac0875c0f01d32315f4ea63049d3a1e1", "text": "Point clouds provide a flexible and scalable geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. Hence, the design of intelligent computational models that act directly on point clouds is critical, especially when efficiency considerations or noise preclude the possibility of expensive denoising and meshing procedures. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv is differentiable and can be plugged into existing architectures. Compared to existing modules operating largely in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked or recurrently applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. Beyond proposing this module, we provide extensive evaluation and analysis revealing that EdgeConv captures and exploits fine-grained geometric properties of point clouds. The proposed approach achieves state-of-the-art performance on standard benchmarks including ModelNet40 and S3DIS. ∗Equal Contribution", "title": "" }, { "docid": "395362cb22b0416e8eca67ec58907403", "text": "This paper presents an approach for labeling objects in 3D scenes. We introduce HMP3D, a hierarchical sparse coding technique for learning features from 3D point cloud data. HMP3D classifiers are trained using a synthetic dataset of virtual scenes generated using CAD models from an online database. Our scene labeling system combines features learned from raw RGB-D images and 3D point clouds directly, without any hand-designed features, to assign an object label to every 3D point in the scene. Experiments on the RGB-D Scenes Dataset v.2 demonstrate that the proposed approach can be used to label indoor scenes containing both small tabletop objects and large furniture pieces.", "title": "" }, { "docid": "348a5c33bde53e7f9a1593404c6589b4", "text": "Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. 
In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.", "title": "" } ]
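As an illustrative sketch only, and not taken from the papers above: the EdgeConv and PointNet++ passages describe building k-nearest-neighbour graphs over point sets and learning features on the resulting edges. The plain NumPy fragment below shows the edge-feature construction commonly associated with EdgeConv, with a single linear-plus-ReLU transform standing in for the learned MLP and the neighbour count k chosen arbitrarily.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbours of each point (brute force, self excluded)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]                 # (N, k)

def edge_conv_layer(points, feats, k, weight):
    """One EdgeConv-style layer: a transform of [x_i, x_j - x_i], then max over neighbours.

    `weight` is a single (2C, C_out) matrix standing in for the learned edge MLP;
    real implementations stack several layers with normalisation and nonlinearities."""
    idx = knn_indices(points, k)                          # neighbourhoods from coordinates
    x_i = np.repeat(feats[:, None, :], k, axis=1)         # (N, k, C) centre features
    x_j = feats[idx]                                      # (N, k, C) neighbour features
    edge = np.concatenate([x_i, x_j - x_i], axis=-1)      # (N, k, 2C) edge features
    edge = np.maximum(edge @ weight, 0.0)                 # shared per-edge transform + ReLU
    return edge.max(axis=1)                               # (N, C_out) max aggregation
```

Stacking such layers, with neighbourhoods recomputed in feature space, is what lets affinity capture semantic similarity over long spatial distances, as the first passage notes.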
[ { "docid": "665fcc17971dc34ed6f89340e3b7bfe2", "text": "Central to the development of computer vision systems is the collection and use of annotated images spanning our visual world. Annotations may include information about the identity, spatial extent, and viewpoint of the objects present in a depicted scene. Such a database is useful for the training and evaluation of computer vision systems. Motivated by the availability of images on the Internet, we introduced a web-based annotation tool that allows online users to label objects and their spatial extent in images. To date, we have collected over 400 000 annotations that span a variety of different scene and object classes. In this paper, we show the contents of the database, its growth over time, and statistics of its usage. In addition, we explore and survey applications of the database in the areas of computer vision and computer graphics. Particularly, we show how to extract the real-world 3-D coordinates of images in a variety of scenes using only the user-provided object annotations. The output 3-D information is comparable to the quality produced by a laser range scanner. We also characterize the space of the images in the database by analyzing 1) statistics of the co-occurrence of large objects in the images and 2) the spatial layout of the labeled images.", "title": "" }, { "docid": "036c7fc778fdacfd2595c65dc7af9f78", "text": "A new transformerless buck–boost converterwith simple structure is proposed in this study. Comparedwith the traditional buck–boost converter, the proposedbuck–boost converter’s voltage gain is squared times of theformer’s and its output voltage polarity is positive. Theseadvantages enable it to work in a wider range of positiveoutput. The two power switches of the proposed buck–boost converter operate synchronously. In the continuousconduction mode (CCM), two inductors are magnetized andtwo capacitors are discharged during the switch-on period,while two inductors are demagnetized and two capacitorsare charged during the switch-off period. The power electronicssimulator (PSIM) and the circuit experiments are providedto validate the effectiveness of the proposed buck–boostconverter.", "title": "" }, { "docid": "466c537fca72aaa1e9cda2dc22c0f504", "text": "This paper presents a single-phase grid-connected photovoltaic (PV) module-integrated converter (MIC) based on cascaded quasi-Z-source inverters (qZSI). In this system, each qZSI module serves as an MIC and is connected to one PV panel. Due to the cascaded structure and qZSI topology, the proposed MIC features low-voltage gain requirement, single-stage energy conversion, enhanced reliability, and good output power quality. Furthermore, the enhancement mode gallium nitride field-effect transistors (eGaN FETs) are employed in the qZSI module for efficiency improvement at higher switching frequency. It is found that the qZSI is very suitable for the application of eGaN FETs because of the shoot-through capability. Optimized module design is developed based on the derived qZSI ac equivalent model and power loss analytical model to achieve high efficiency and high power density. A design example of qZSI module is presented for a 250-W PV panel with 25-50-V output voltage. The simulation and experimental results prove the validity of the analytical models. 
The final module prototype design achieves up to 98.06% efficiency with 100-kHz switching frequency.", "title": "" }, { "docid": "9077dede1c2c4bc4b696a93e01c84f52", "text": "Reliable continuous core temperature measurement is of major importance for monitoring patients. The zero heat flux method (ZHF) can potentially fulfil the requirements of non-invasiveness, reliability and short delay time that current measurement methods lack. The purpose of this study was to determine the performance of a new ZHF device on the forehead regarding these issues. Seven healthy subjects performed a protocol of 10 min rest, 30 min submaximal exercise (average temperature increase about 1.5 °C) and 10 min passive recovery in ambient conditions of 35 °C and 50% relative humidity. ZHF temperature (T(zhf)) was compared to oesophageal (T(es)) and rectal (T(re)) temperature. ΔT(zhf)-T(es) had an average bias ± standard deviation of 0.17 ± 0.19 °C in rest, -0.05 ± 0.18 °C during exercise and -0.01 ± 0.20 °C during recovery, the latter two being not significant. The 95% limits of agreement ranged from -0.40 to 0.40 °C and T(zhf) had hardly any delay compared to T(es). T(re) showed a substantial delay and deviation from T(es) when core temperature changed rapidly. Results indicate that the studied ZHF sensor tracks T(es) very well in hot and stable ambient conditions and may be a promising alternative for reliable non-invasive continuous core temperature measurement in hospital.", "title": "" }, { "docid": "dea6ad0e1985260dbe7b70cef1c5da54", "text": "The commonest mitochondrial diseases are probably those impairing the function of complex I of the respiratory electron transport chain. Such complex I impairment may contribute to various neurodegenerative disorders e.g. Parkinson's disease. In the following, using hepatocytes as a model cell, we have shown for the first time that the cytotoxicity caused by complex I inhibition by rotenone but not that caused by complex III inhibition by antimycin can be prevented by coenzyme Q (CoQ1) or menadione. Furthermore, complex I inhibitor cytotoxicity was associated with the collapse of the mitochondrial membrane potential and reactive oxygen species (ROS) formation. ROS scavengers or inhibitors of the mitochondrial permeability transition prevented cytotoxicity. The CoQ1 cytoprotective mechanism required CoQ1 reduction by DT-diaphorase (NQO1). Furthermore, the mitochondrial membrane potential and ATP levels were restored at low CoQ1 concentrations (5 microM). This suggests that the CoQ1H2 formed by NQO1 reduced complex III and acted as an electron bypass of the rotenone block. However cytoprotection still occurred at higher CoQ1 concentrations (>10 microM), which were less effective at restoring ATP levels but readily restored the cellular cytosolic redox potential (i.e. lactate: pyruvate ratio) and prevented ROS formation. This suggests that CoQ1 or menadione cytoprotection also involves the NQO1 catalysed reoxidation of NADH that accumulates as a result of complex I inhibition. The CoQ1H2 formed would then also act as a ROS scavenger.", "title": "" }, { "docid": "4a1263b1cc76aed4913e258b5a145927", "text": "Numerous applications in computer vision and machine learning rely on representations of data that are compact, discriminative, and robust while satisfying several desirable invariances. One such recently successful representation is offered by symmetric positive definite (SPD) matrices. 
However, the modeling power of SPD matrices comes at a price: rather than a flat Euclidean view, SPD matrices are more naturally viewed through curved geometry (Riemannian or otherwise) which often complicates matters. We focus on models and algorithms that rely on the geometry of SPD matrices, and make our discussion concrete by casting it in terms of covariance descriptors for images. We summarize various commonly used distance metrics on SPD matrices, before highlighting formulations and algorithms for solving sparse coding and dictionary learning problems involving SPD data. Through empirical results, we showcase the benefits of mathematical models that exploit the curved geometry of SPD data across a diverse set of computer vision applications.", "title": "" }, { "docid": "85b885986958b388b7fda7ca2426a583", "text": "To reduce the risk of catheter-associated urinary tract infection (CAUTI), limiting use of indwelling catheters is encouraged with alternative collection methods and early removal. Adverse effects associated with such practices have not been described. We also determined if CAUTI preventative measures increase the risk of catheter-related complications. We hypothesized that there are complications associated with early removal of indwelling catheters. We described complications associated with indwelling catheterization and intermittent catheterization, and compared complication rates before and after policy updates changed catheterization practices. We performed retrospective cohort analysis of trauma patients admitted between August 1, 2009, and December 31, 2013 who required indwelling catheter. Associations between catheter days and adverse outcomes such as infection, bladder overdistention injury, recatheterization, urinary retention, and patients discharged with indwelling catheter were evaluated. The incidence of CAUTI and the total number of catheter days pre and post policy change were similar. The incidence rate of urinary retention and associated complications has increased since the policy changed. Practices intended to reduce the CAUTI rate are associated with unintended complications, such as urinary retention. Patient safety and quality improvement programs should monitor all complications associated with urinary catheterization practices, not just those that represent financial penalties.", "title": "" }, { "docid": "432fe001ec8f1331a4bd033e9c49ccdf", "text": "Recently, methods based on local image features have shown promise for texture and object recognition tasks. This paper presents a large-scale evaluation of an approach that represents images as distributions (signatures or histograms) of features extracted from a sparse set of keypoint locations and learns a Support Vector Machine classifier with kernels based on two effective measures for comparing distributions, the Earth Mover’s Distance and the χ2 distance. We first evaluate the performance of our approach with different keypoint detectors and descriptors, as well as different kernels and classifiers. We then conduct a comparative evaluation with several state-of-the-art recognition methods on four texture and five object databases. On most of these databases, our implementation exceeds the best reported results and achieves comparable performance on the rest. Finally, we investigate the influence of background correlations on recognition performance via extensive tests on the PASCAL database, for which ground-truth object localization information is available. 
Our experiments demonstrate that image representations based on distributions of local features are surprisingly effective for classification of texture and object images under challenging real-world conditions, including significant intra-class variations and substantial background clutter.", "title": "" }, { "docid": "4ee17de5de87d923fafc9dbbe7266f2b", "text": "Introduction Researchers have agreed that a favorable corporate reputation is one of the most important intangible assets driving company performance (Chun 2005; Fisher-Buttinger and Vallaster 2011; Gibson et al. 2006). Not to be confused with brand identity and image, corporate reputation is often defined as consumers’ accumulated opinions, perceptions, and attitudes towards the company (Fombrun et al. 2000; Fombrun and Shanley 1990; Hatch and Schultz 2001; Weigelt and Camerer 1988). In addition, corporate reputation is established by individuals’ relative perspective; thus, corporate reputation is closely linked to the consumers’ subjective evaluation about the company (Fombrun and Shanley 1990; Weigelt and Camerer 1988). The effect of corporate reputation on corporate performance has been supported in many articles. Earlier studies have reported that a positive reputation has a significant Abstract", "title": "" }, { "docid": "a95094552dad7270bdaa73e2c7351ab4", "text": "Unlike most domestic livestock species, sheep are widely known as an animal with marked seasonality of breeding activity. The annual cycle of daily photoperiod has been identified as the determinant factor of this phenomenon, while environmental temperature, nutritional status, social interactions, lambing date and lactation period are considered to modulate it. The aim of this paper is to review the current state of knowledge of the reproductive seasonality in sheep. Following general considerations concerning the importance of seasonal breeding as a reproductive strategy for the survival of species, the paper describes the manifestations of seasonality in both the ram and the ewe. Both determinant and modulating factors are developed and special emphasis is given to the neuroendocrine base of photoperiodic regulation of seasonal breeding. Other aspects such as the role of melatonin, the involvement of thyroid hormones and the concept of photorefractoriness are also reviewed. © 2003 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "4002b79aac3ab479451006b66723b766", "text": "Wearable devices have recently received considerable interest due to their great promise for a plethora of applications. Increased research efforts are oriented towards a non-invasive monitoring of human health as well as activity parameters. A wide range of wearable sensors are being developed for real-time non-invasive monitoring. This paper provides a comprehensive review of sensors used in wrist-wearable devices, methods used for the visualization of parameters measured as well as methods used for intelligent analysis of data obtained from wrist-wearable devices. In line with this, the main features of commercial wrist-wearable devices are presented. 
As a result of this review, a taxonomy of sensors, functionalities, and methods used in non-invasive wrist-wearable devices was assembled.", "title": "" }, { "docid": "92e61ad424b421a5621d490bf664b28f", "text": "Papers and patents that deal with polymorphism (crystal systems for which a substance can exist in structures defined by different unit cells and where each of the forms has the same elemental composition) and solvatomorphism (systems where the crystal structures of the substance are defined by different unit cells but where these unit cells differ in their elemental composition through the inclusion of one or more molecules of solvent) have been summarized in an annual review. The works cited in this review were published during 2010 and were drawn from the major physical, crystallographic, and pharmaceutical journals. The review is divided into sections that cover articles of general interest, computational and theoretical studies, preparative and isolation methods, structural characterization and properties of polymorphic and solvatomorphic systems, studies of phase transformations, effects associated with secondary processing, and US patents issued during 2010.", "title": "" }, { "docid": "23ce33bb6ffbbd8f598cdcd0498d7828", "text": "Artificial neural networks (ANNs) are a promising machine learning technique in classifying non-linear electrocardiogram (ECG) signals and recognizing abnormal patterns suggesting risks of cardiovascular diseases (CVDs). In this paper, we propose a new reusable neuron architecture (RNA) enabling a performance-efficient and cost-effective silicon implementation for ANN. The RNA architecture consists of a single layer of physical RNA neurons, each of which is designed to use minimal hardware resource (e.g., a single 2-input multiplier-accumulator is used to compute the dot product of two vectors). By carefully applying the principle of time sharing, RNA can multiplex this single layer of physical neurons to efficiently execute both feed-forward and back-propagation computations of an ANN while conserving the area and reducing the power dissipation of the silicon. A three-layer 51-30-12 ANN is implemented in RNA to perform the ECG classification for CVD detection. This RNA hardware also allows on-chip automatic training update. A quantitative design space exploration in area, power dissipation, and execution speed between RNA and three other implementations representative of different reusable hardware strategies is presented and discussed. Compared with an equivalent software implementation in C executed on an embedded microprocessor, the RNA ASIC achieves three orders of magnitude improvements in both the execution speed and the energy efficiency.", "title": "" }, { "docid": "f5d92a445b2d4ecfc55393794258582c", "text": "This paper presents a multi-modulus frequency divider (MMD) based on the Extended True Single-Phase Clock (E-TSPC) Logic. The MMD consists of four cascaded divide-by-2/3 E-TSPC cells. The basic functionality of the MMD and the E-TSPC 2/3 divider are explained. The whole design was implemented in a 0.13-µm CMOS process from IBM. Simulation and measurement results of the MMD are shown. Measurement results indicate a maximum operating frequency of 10 GHz and a power consumption of 4 mW for each stage.
These results are compared to other state of the art dual modulus E-TSPC dividers, showing the good position of this design relating to operating frequency and power consumption.", "title": "" }, { "docid": "e0fe79c4df207826ae9946031884a603", "text": "Document image processing is a crucial process in office automation and begins at the ‘OCR’ phase with difficulties in document ‘analysis’ and ‘understanding’. This paper presents a hybrid and comprehensive approach to document structure analysis. Hybrid in the sense that it makes use of layout (geometrical) as well as textual features of a given document. These features are the base for potential conditions which in turn are used to express fuzzy matched rules of an underlying rule base. Rules can be formulated based on features which might be observed within one specific layout object. However, rules can also express dependencies between different layout objects. In addition to its rule driven analysis, which allows an easy adaptation to specific domains with their specific logical objects, the system contains domain-independent markup algorithms for common objects (e.g., lists).", "title": "" }, { "docid": "31c2dc8045f43c7bf1aa045e0eb3b9ad", "text": "This paper addresses the task of functional annotation of genes from biomedical literature. We view this task as a hierarchical text categorization problem with Gene Ontology as a class hierarchy. We present a novel global hierarchical learning approach that takes into account the semantics of a class hierarchy. This algorithm with AdaBoost as the underlying learning procedure significantly outperforms the corresponding “flat” approach, i.e. the approach that does not consider any hierarchical information. In addition, we propose a novel hierarchical evaluation measure that gives credit to partially correct classification and discriminates errors by both distance and depth in a class hierarchy.", "title": "" }, { "docid": "dc7262a2e046bd5f633e9f5fbb5f1830", "text": "We investigate a dual-annular-ring CMUT array configuration for forward-looking intravascular ultrasound (FL-IVUS) imaging. The array consists of separate, concentric transmit and receive ring arrays built on the same silicon substrate. This configuration has the potential for independent optimization of each array and uses the silicon area more effectively without any particular drawback. We designed and fabricated a 1 mm diameter test array which consists of 24 transmit and 32 receive elements. We investigated synthetic phased array beamforming with a non-redundant subset of transmit-receive element pairs of the dual-annular-ring array. For imaging experiments, we designed and constructed a programmable FPGA-based data acquisition and phased array beamforming system. Pulse-echo measurements along with imaging simulations suggest that dual-ring-annular array should provide performance suitable for real-time FL-IVUS applications", "title": "" }, { "docid": "81273c11eb51349d0027e2ff2e54c080", "text": "The ground-volume separation of radar scattering plays an important role in the analysis of forested scenes. For this purpose, the data covariance matrix of multi-polarimetric (MP) multi-baseline (MB) SAR surveys can be represented thru a sum of two Kronecker products composed of the data covariance matrices and polarimetric signatures that correspond to the ground and canopy scattering mechanisms (SMs), respectively. 
The sum of Kronecker products (SKP) decomposition allows the use of different tomographic SAR focusing methods on the ground and canopy structural components separately, nevertheless, the main drawback of this technique relates to the rank-deficiencies of the resultant data covariance matrices, which restrict the usage of the adaptive beamforming techniques, requiring more advanced beamforming methods, such as compressed sensing (CS). This paper proposes a modification of the nonparametric iterative adaptive approach for amplitude and phase estimation (IAA-APES), which applied to MP-MB SAR data, serves as an alternative to the SKP-based techniques for ground-volume reconstruction, which main advantage relates precisely to the non-need of the SKP decomposition technique as a pre-processing step.", "title": "" }, { "docid": "4c8cff1e750c8f4c9fc42df7113e0212", "text": "Misdiagnosis is frequent in scabies of infants and children because of a low index of suspicion, secondary eczematous changes, and inappropriate therapy. Topical or systemic corticosteroids may modify the clinical presentation of scabies and that situation is referred to as scabies incognito. We describe a 10-month-old infant with scabies incognito mimicking urticaria pigmentosa.", "title": "" }, { "docid": "88119e4a825f48b89d87021a8ae4b713", "text": "It is anticipated that in the near future, social robots will become integral part of schools to enhance student learning experience. This paper reports students' experience through a new robotics competition with social robots for primary and secondary school students. The Junior category of the World Robot Summit (WRS) offers a new robotics competition for students focusing on the Co-Bot experience (human-robot co-existence), hosted by the Japan Ministry of Economy, Trade, and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO). The social robot that is used in the World Robot Summit is Pepper, a sophisticated humanoid robot, offered by SoftBank Robotics. This paper focuses on students' experience as active users of social robots through the School Robot Challenge Workshop and Trial 2017 held in Tokyo, Japan in August 2017, where they programmed and/or developed solutions for the tasks using social robots. There were 13 teams from various countries participated in the two-and-half day workshop to learn to program Pepper, and two-day trial competition where they demonstrated their work. During the two-day trial, although there were some technical glitches, all 13 teams demonstrated their solutions and performance they developed with their Co-Bot ideas developed during the workshop.", "title": "" } ]
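As an illustrative, assumption-laden sketch rather than the authors' implementation: the radar passage above writes the MP-MB data covariance as a sum of Kronecker products of polarimetric signatures and structural covariances. One standard way to obtain such a decomposition is the rearrangement-plus-SVD construction below, where the number of polarimetric channels p, the number of baselines n and the number of retained terms r are placeholders.

```python
import numpy as np

def rearrange(R, p, n):
    """Rearrange a (p*n) x (p*n) covariance into a (p*p) x (n*n) matrix whose
    low-rank SVD yields a sum-of-Kronecker-products approximation."""
    rows = []
    for i in range(p):
        for j in range(p):
            block = R[i * n:(i + 1) * n, j * n:(j + 1) * n]
            rows.append(block.reshape(-1, order="F"))     # vec() of each n x n block
    return np.array(rows)                                  # (p*p, n*n)

def skp_decompose(R, p, n, r=2):
    """Approximate R ~= sum_k C_k kron T_k with C_k (p x p) and T_k (n x n)."""
    U, s, Vh = np.linalg.svd(rearrange(R, p, n), full_matrices=False)
    C_terms, T_terms = [], []
    for kk in range(r):
        C_terms.append(np.sqrt(s[kk]) * U[:, kk].reshape(p, p))              # row-major, matches loop order
        T_terms.append(np.sqrt(s[kk]) * Vh[kk, :].reshape(n, n, order="F"))  # undo the block vec()
    return C_terms, T_terms
```

For the MP-MB setting discussed above one would typically assume p = 3 polarimetric channels, n equal to the number of baselines, and r = 2 terms for the ground and canopy mechanisms; those values are assumptions here, not taken from the paper.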
scidocsrr
a7612de005b4f9305c60c15ae8092310
Adjuvant capecitabine and oxaliplatin for gastric cancer after D2 gastrectomy (CLASSIC): a phase 3 open-label, randomised controlled trial
[ { "docid": "3cceb3792d55bd14adb579bb9e3932ec", "text": "BACKGROUND\nTrastuzumab, a monoclonal antibody against human epidermal growth factor receptor 2 (HER2; also known as ERBB2), was investigated in combination with chemotherapy for first-line treatment of HER2-positive advanced gastric or gastro-oesophageal junction cancer.\n\n\nMETHODS\nToGA (Trastuzumab for Gastric Cancer) was an open-label, international, phase 3, randomised controlled trial undertaken in 122 centres in 24 countries. Patients with gastric or gastro-oesophageal junction cancer were eligible for inclusion if their tumours showed overexpression of HER2 protein by immunohistochemistry or gene amplification by fluorescence in-situ hybridisation. Participants were randomly assigned in a 1:1 ratio to receive a chemotherapy regimen consisting of capecitabine plus cisplatin or fluorouracil plus cisplatin given every 3 weeks for six cycles or chemotherapy in combination with intravenous trastuzumab. Allocation was by block randomisation stratified by Eastern Cooperative Oncology Group performance status, chemotherapy regimen, extent of disease, primary cancer site, and measurability of disease, implemented with a central interactive voice recognition system. The primary endpoint was overall survival in all randomised patients who received study medication at least once. This trial is registered with ClinicalTrials.gov, number NCT01041404.\n\n\nFINDINGS\n594 patients were randomly assigned to study treatment (trastuzumab plus chemotherapy, n=298; chemotherapy alone, n=296), of whom 584 were included in the primary analysis (n=294; n=290). Median follow-up was 18.6 months (IQR 11-25) in the trastuzumab plus chemotherapy group and 17.1 months (9-25) in the chemotherapy alone group. Median overall survival was 13.8 months (95% CI 12-16) in those assigned to trastuzumab plus chemotherapy compared with 11.1 months (10-13) in those assigned to chemotherapy alone (hazard ratio 0.74; 95% CI 0.60-0.91; p=0.0046). The most common adverse events in both groups were nausea (trastuzumab plus chemotherapy, 197 [67%] vs chemotherapy alone, 184 [63%]), vomiting (147 [50%] vs 134 [46%]), and neutropenia (157 [53%] vs 165 [57%]). Rates of overall grade 3 or 4 adverse events (201 [68%] vs 198 [68%]) and cardiac adverse events (17 [6%] vs 18 [6%]) did not differ between groups.\n\n\nINTERPRETATION\nTrastuzumab in combination with chemotherapy can be considered as a new standard option for patients with HER2-positive advanced gastric or gastro-oesophageal junction cancer.\n\n\nFUNDING\nF Hoffmann-La Roche.", "title": "" } ]
[ { "docid": "5d44349955d07a212bc11f6edfaec8b0", "text": "This investigation develops an innovative algorithm for multiple autonomous unmanned aerial vehicle (UAV) mission routing. The concept of a UAV Swarm Routing Problem (SRP) as a new combinatorics problem, is developed as a variant of the Vehicle Routing Problem with Time Windows (VRPTW). Solutions of SRP problem model result in route assignments per vehicle that successfully track to all targets, on time, within distance constraints. A complexity analysis and multi-objective formulation of the VRPTW indicates the necessity of a stochastic solution approach leading to a multi-objective evolutionary algorithm. A full problem definition of the SRP as well as a multi-objective formulation parallels that of the VRPTW method. Benchmark problems for the VRPTW are modified in order to create SRP benchmarks. The solutions show the SRP solutions are comparable or better than the same VRPTW solutions, while also representing a more realistic UAV swarm routing solution.", "title": "" }, { "docid": "adb64a513ab5ddd1455d93fc4b9337e6", "text": "Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.", "title": "" }, { "docid": "d3d6a1793ce81ba0f4f0ffce0477a0ec", "text": "Portable Document Format (PDF) is one of the widely-accepted document format. However, it becomes one of the most attractive targets for exploitation by malware developers and vulnerability researchers. Malicious PDF files can be used in Advanced Persistent Threats (APTs) targeting individuals, governments, and financial sectors. The existing tools such as intrusion detection systems (IDSs) and antivirus packages are inefficient to mitigate this kind of attacks. This is because these techniques need regular updates with the new malicious PDF files which are increasing every day. In this paper, a new algorithm is presented for detecting malicious PDF files based on data mining techniques. The proposed algorithm consists of feature selection stage and classification stage. The feature selection stage is used to the select the optimum number of features extracted from the PDF file to achieve high detection rate and low false positive rate with small computational overhead. 
Experimental results show that the proposed algorithm can achieve 99.77% detection rate, 99.84% accuracy, and 0.05% false positive rate.", "title": "" }, { "docid": "24b970bdf722a0b036acf9c81a846598", "text": "We propose a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system using an adaptive implementation of 1D Convolutional Neural Networks (CNNs) that can fuse feature extraction and classification into a unified learner. In this way, a dedicated CNN will be trained for each patient by using relatively small common and patient-specific training data and thus it can also be used to classify long ECG records such as Holter registers in a fast and accurate manner. Alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The experimental results demonstrate that the proposed system achieves a superior classification performance for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB).", "title": "" }, { "docid": "fd3c142e5cf4761fdd541e0d8e6c584f", "text": "In this paper, we quantify the potential gains of hybrid CDN-P2P for two of the leading CDN companies, Akamai and Limelight. We first develop novel measurement methodology for mapping the topologies of CDN networks. We then consider ISP-friendly P2P distribution schemes which work in conjunction with the CDNs to localize traffic within regions of ISPs. To evaluate these schemes, we use two recent, real-world traces: a video-on-demand trace and a large-scale software update trace. We find that hybrid CDN-P2P can significantly reduce the cost of content distribution, even when peer sharing is localized within ISPs and further localized within regions of ISPs. We conclude that hybrid CDN-P2P distribution can economically satisfy the exponential growth of Internet video content without placing an unacceptable burden on regional ISPs.", "title": "" }, { "docid": "35fbdf776186afa7d8991fa4ff22503d", "text": "Lang Linguist Compass 2016; 10: 701–719 wileyo Abstract Research and industry are becoming more and more interested in finding automatically the polarised opinion of the general public regarding a specific subject. The advent of social networks has opened the possibility of having access to massive blogs, recommendations, and reviews. The challenge is to extract the polarity from these data, which is a task of opinion mining or sentiment analysis. The specific difficulties inherent in this task include issues related to subjective interpretation and linguistic phenomena that affect the polarity of words. Recently, deep learning has become a popular method of addressing this task. However, different approaches have been proposed in the literature. This article provides an overview of deep learning for sentiment analysis in order to place these approaches in context.", "title": "" }, { "docid": "357576d56c379c2b5d4365ad1412ff92", "text": "Malware authors employ a myriad of evasion techniques to impede automated reverse engineering and static analysis efforts. The most popular technologies include ‘code obfuscators’ that serve to rewrite the original binary code to an equivalent form that provides identical functionality while defeating signature-based detection systems. These systems significantly complicate static analysis, making it challenging to uncover the malware intent and the full spectrum of embedded capabilities. 
While code obfuscation techniques are commonly integrated into contemporary commodity packers, from the perspective of a reverse engineer, deobfuscation is often a necessary step that must be conducted independently after unpacking the malware binary. In this paper, we describe a set of techniques for automatically unrolling the impact of code obfuscators with the objective of completely recovering the original malware logic. We have implemented a set of generic debofuscation rules as a plug-in for the popular IDA Pro disassembler. We use sophisticated obfuscation strategies employed by two infamous malware instances from 2009, Conficker C and Hydraq (the binary associated with the Aurora attack) as case studies. In both instances our deobfuscator enabled a complete decompilation of the underlying code logic. This work was instrumental in the comprehensive reverse engineering of the heavily obfuscated P2P protocol embedded in the Conficker worm. The plug-in is integrated with the HexRays decompiler to provide a complete reverse engineering of malware binaries from binary form to C code and is available for free download on the SRI malware threat center website: http://www.mtc.sri.com/deobfuscation/.", "title": "" }, { "docid": "a96209a2f6774062537baff5d072f72f", "text": "In recent years, extensive research has been conducted in the area of Service Level Agreement (SLA) for utility computing systems. An SLA is a formal contract used to guarantee that consumers’ service quality expectation can be achieved. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. Fundamental issue is the management of SLAs, including SLA autonomy management or trade off among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification for these extensive works. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environment. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-art systems and emerging challenges for future research.", "title": "" }, { "docid": "7db739a603bf9a2eba64c76a6211f4d6", "text": "While traditional multimedia applications such as games and videos are still popular, there has been a significant interest in the recent years towards new 3D media such as 3D immersion and Virtual Reality (VR) applications, especially 360 VR videos. 360 VR video is an immersive spherical video where the user can look around during playback. Unfortunately, 360 VR videos are extremely bandwidth intensive, and therefore are difficult to stream at acceptable quality levels. In this paper, we propose an adaptive bandwidth-efficient 360 VR video streaming system using a divide and conquer approach. We propose a dynamic view-aware adaptation technique to tackle the huge bandwidth demands of 360 VR video streaming. We spatially divide the videos into multiple tiles while encoding and packaging, use MPEG-DASH SRD to describe the spatial relationship of tiles in the 360-degree space, and prioritize the tiles in the Field of View (FoV). In order to describe such tiled representations, we extend MPEG-DASH SRD to the 3D space of 360 VR videos. 
We spatially partition the underlying 3D mesh, and construct an efficient 3D geometry mesh called hexaface sphere to optimally represent a tiled 360 VR video in the 3D space. Our initial evaluation results report up to 72% bandwidth savings on 360 VR video streaming with minor negative quality impacts compared to the baseline scenario when no adaptations is applied.", "title": "" }, { "docid": "dc259f1208eac95817d067b9cd13fa7c", "text": "This paper introduces a novel approach to texture synthesis based on generative adversarial networks (GAN) (Goodfellow et al., 2014). We extend the structure of the input noise distribution by constructing tensors with different types of dimensions. We call this technique Periodic Spatial GAN (PSGAN). The PSGAN has several novel abilities which surpass the current state of the art in texture synthesis. First, we can learn multiple textures from datasets of one or more complex large images. Second, we show that the image generation with PSGANs has properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset. In addition, we can also accurately learn periodical textures. We make multiple experiments which show that PSGANs can flexibly handle diverse texture and image data sources. Our method is highly scalable and it can generate output images of arbitrary large size.", "title": "" }, { "docid": "237345020161bab7ce0b0bba26c5cc98", "text": "This paper addresses the difficulty of designing 1-V capable analog circuits in standard digital complementary metal–oxide–semiconductor (CMOS) technology. Design techniques for facilitating 1-V operation are discussed and 1-V analog building block circuits are presented. Most of these circuits use the bulk-driving technique to circumvent the metal– oxide–semiconductor field-effect transistor turn-on (threshold) voltage requirement. Finally, techniques are combined within a 1-V CMOS operational amplifier with rail-to-rail input and output ranges. While consuming 300 W, the 1-V rail-to-rail CMOS op amp achieves 1.3-MHz unity-gain frequency and 57 phase margin for a 22-pF load capacitance.", "title": "" }, { "docid": "d2b3ef20c0f8e1390a6b0adc15600c1c", "text": "Health is an important topic in HCI research with an increasing amount of health risks surrounding individuals and society at large. It is well known that smoking cigarettes can have serious health implications. The importance of this problem motivates investigation into the use of technology to encourage behavior change. Our study was designed to gather empirical knowledge about the role a \"quitting app\" can play in persuading people to quit smoking. Our purpose-built app Quitty introduces different content types from different content sources to study how they are perceived and motivate health behavior change. Findings from our field study show that tailored content and push-messages are considered the most important for persuading people to stop smoking. Based on our empirical findings, we propose six guidelines on how to design mobile applications to persuade smokers to quit.", "title": "" }, { "docid": "1f7d0ccae4e9f0078eabb9d75d1a8984", "text": "A social network is composed by communities of individuals or organizations that are connected by a common interest. Online social networking sites like Twitter, Facebook and Orkut are among the most visited sites in the Internet. 
Presently, there is a great interest in trying to understand the complexities of this type of network from both theoretical and applied point of view. The understanding of these social network graphs is important to improve the current social network systems, and also to develop new applications. Here, we propose a friend recommendation system for social network based on the topology of the network graphs. The topology of network that connects a user to his friends is examined and a local social network called Oro-Aro is used in the experiments. We developed an algorithm that analyses the sub-graph composed by a user and all the others connected people separately by three degree of separation. However, only users separated by two degree of separation are candidates to be suggested as a friend. The algorithm uses the patterns defined by their connections to find those users who have similar behavior as the root user. The recommendation mechanism was developed based on the characterization and analyses of the network formed by the user's friends and friends-of-friends (FOF).", "title": "" }, { "docid": "60971d26877ef62b816526f13bd76c24", "text": "Breast cancer is one of the leading causes of cancer death among women worldwide. In clinical routine, automatic breast ultrasound (BUS) image segmentation is very challenging and essential for cancer diagnosis and treatment planning. Many BUS segmentation approaches have been studied in the last two decades, and have been proved to be effective on private datasets. Currently, the advancement of BUS image segmentation seems to meet its bottleneck. The improvement of the performance is increasingly challenging, and only few new approaches were published in the last several years. It is the time to look at the field by reviewing previous approaches comprehensively and to investigate the future directions. In this paper, we study the basic ideas, theories, pros and cons of the approaches, group them into categories, and extensively review each category in depth by discussing the principles, application issues, and advantages/disadvantages. Keyword: breast ultrasound (BUS) images; breast cancer; segmentation; benchmark; early detection; computer-aided diagnosis (CAD)", "title": "" }, { "docid": "e7597ab3c1ea61571b59170f2d981dc7", "text": "In this paper, a tri-material double-gate (TMDG) tunnel field effect transistor (TFET) is introduced. By using a variable separation technique, analytical full 2D channel potential and electric field models for TMDG TFET is derived from 2D Poisson's equation. The electric field distribution is used to compute the tunneling generation rate and further numerically calculate the tunneling current. The results demonstrate that a smaller local minimum of conduction band at the source side and a larger tunneling barrier at the drain side are formed by the gate work function mismatch. This special band energy profile can significantly boost the on-state performance and suppress the off-state current induced by the ambipolar effect. The data extracted from the developed models are in good accordance with the TCAD simulation results.", "title": "" }, { "docid": "2d05142e12f63a354ec0c48436cd3697", "text": "Author Name Disambiguation Neil R. Smalheiser and Vetle I. Torvik", "title": "" }, { "docid": "28f0b9aeba498777e1f4a946f2bb4e65", "text": "Idiomatic expressions are plentiful in everyday language, yet they remain mysterious, as it is not clear exactly how people learn and understand them. 
They are of special interest to linguists, psycholinguists, and lexicographers, mainly because of their syntactic and semantic idiosyncrasies as well as their unclear lexical status. Despite a great deal of research on the properties of idioms in the linguistics literature, there is not much agreement on which properties are characteristic of these expressions. Because of their peculiarities, idiomatic expressions have mostly been overlooked by researchers in computational linguistics. In this article, we look into the usefulness of some of the identified linguistic properties of idioms for their automatic recognition. Specifically, we develop statistical measures that each model a specific property of idiomatic expressions by looking at their actual usage patterns in text. We use these statistical measures in a type-based classification task where we automatically separate idiomatic expressions (expressions with a possible idiomatic interpretation) from similar-on-the-surface literal phrases (for which no idiomatic interpretation is possible). In addition, we use some of the measures in a token identification task where we distinguish idiomatic and literal usages of potentially idiomatic expressions in context.", "title": "" }, { "docid": "120e36cc162f4ce602da810c80c18c7d", "text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.", "title": "" }, { "docid": "1eb292b564276c43b203e02219c0de21", "text": "The “cvpaper.challenge” is a group composed of members from AIST, Tokyo Denki Univ. (TDU), and Univ. of Tsukuba that aims to systematically summarize papers on computer vision, pattern recognition, and related fields. For this particular review, we focused on reading the ALL 602 conference papers presented at the CVPR2015, the premier annual computer vision event held in June 2015, in order to grasp the trends in the field. Further, we are proposing “DeepSurvey” as a mechanism embodying the entire process from the reading through all the papers, the generation of ideas, and to the writing of paper.", "title": "" }, { "docid": "cc78d1482412669e05f57e13cbc1c59f", "text": "We present a method to learn and propagate shape placements in 2D polygonal scenes from a few examples provided by a user. The placement of a shape is modeled as an oriented bounding box. Simple geometric relationships between this bounding box and nearby scene polygons define a feature set for the placement. The feature sets of all example placements are then used to learn a probabilistic model over all possible placements and scenes. 
With this model, we can generate a new set of placements with similar geometric relationships in any given scene. We introduce extensions that enable propagation and generation of shapes in 3D scenes, as well as the application of a learned modeling session to large scenes without additional user interaction. These concepts allow us to generate complex scenes with thousands of objects with relatively little user interaction.", "title": "" } ]
scidocsrr
6de2a1a64f8613c429d8cfc3b6958fa7
Economic Security Metrics
[ { "docid": "b62da3e709d2bd2c7605f3d0463eff2f", "text": "This study examines the economic effect of information security breaches reported in newspapers on publicly traded US corporations. We find limited evidence of an overall negative stock market reaction to public announcements of information security breaches. However, further investigation reveals that the nature of the breach affects this result. We find a highly significant negative market reaction for information security breaches involving unauthorized access to confidential data, but no significant reaction when the breach does not involve confidential information. Thus, stock market participants appear to discriminate across types of breaches when assessing their economic impact on affected firms. These findings are consistent with the argument that the economic consequences of information security breaches vary according to the nature of the underlying assets affected by the breach.", "title": "" }, { "docid": "7c016ac731c9af830826a74662554fdc", "text": "While the literature on information security economics has begun to investigate the stock market impact of security breaches and vulnerability announcements, little more than anecdotal evidence exists on the effects of privacy breaches. In this paper we present the first comprehensive analysis of the impact of a company’s privacy incidents on its market value. We compile a broad data set of instances of exposure of personal information due to failures of some security mechanism (hacking, stolen or lost equipment, poor data handling processes, and others) and we present the results of various empirical analyses, including event study analysis. We show that there exists a negative and statistically significant impact of data breaches on a company’s market value on the announcement day for the breach. The cumulative effect increases in magnitudes over the day following the breach announcement, but then decreases and loses statistical significance. We also present regression analyses that aim at disentangling the eff ects of a number of factors on abnormal stock returns due to reported breaches. Finally, we comment on the differences between the impact of the security breaches already described in the literature, and the privacy breaches described here.", "title": "" } ]
[ { "docid": "2a059577ca2a186c53ac76c6a3eae82d", "text": "• Talk with faculty members who are best acquainted with the field(s) of study that interest you. • Talk with professionals in the occupations you wish to enter about the types of degrees and/or credentials they hold or recommend for entry or advancement in field. • Browse program websites or look through catalogs and graduate school reference books (some of these are available in the CSC Resource Library) and determine prerequisites, length and scope of program, etc. • Narrow down your choices based on realistic assessment of your strengths and the programs, which will meet your needs.", "title": "" }, { "docid": "717605f0fb1a17825b3e851187b85299", "text": "We present a new method for measuring photoplethysmogram signals remotely using ambient light and a digital camera that allows for accurate recovery of the waveform morphology (from a distance of 3 m). In particular, we show that the peak-to-peak time between the systolic peak and diastolic peak/inflection can be automatically recovered using the second-order derivative of the remotely measured waveform. We compare measurements from the face with those captured using a contact fingertip sensor and show high agreement in peak and interval timings. Furthermore, we show that results can be significantly improved using orange, green, and cyan color channels compared to the tradition red, green, and blue channel combination. The absolute error in interbeat intervals was 26 ms and the absolute error in mean systolic-diastolic peak-to-peak times was 12 ms. The mean systolic-diastolic peak-to-peak times measured using the contact sensor and the camera were highly correlated, ρ = 0.94 (p <; 0.001). The results were obtained with a camera frame-rate of only 30 Hz. This technology has significant potential for advancing healthcare.", "title": "" }, { "docid": "8ab08f51d15d6b5ff751334be1896f9f", "text": "This paper analyzes the role that different indices and dimensions of ethnicity play in the process of economic development. Firstly, we discuss the advantages and disadvantages of alternative data sources for the construction of indices of religious and ethnic heterogeneity. Secondly, we compare the index of fractionalization and the index of polarization. We argue that an index of the family of discrete polarization measures is the adequate indicator to measure potential conflict. We find that ethnic (religious) polarization has a large and negative effect on economic development through the reduction of investment and the increase of government consumption and the probability of a civil conflict. D 2004 Published by Elsevier B.V. JEL classification: O11; Z12; O55", "title": "" }, { "docid": "f5076644c68ec6261fab541066ad6df5", "text": "Social media channels, such as Facebook or Twitter, allow for people to express their views and opinions about any public topics. Public sentiment related to future events, such as demonstrations or parades, indicate public attitude and therefore may be applied while trying to estimate the level of disruption and disorder during such events. Consequently, sentiment analysis of social media content may be of interest for different organisations, especially in security and law enforcement sectors. This paper presents a new lexicon-based sentiment analysis algorithm that has been designed with the main focus on real time Twitter content analysis. 
The algorithm consists of two key components, namely sentiment normalisation and evidence-based combination function, which have been used in order to estimate the intensity of the sentiment rather than positive/negative label and to support the mixed sentiment classification process. Finally, we illustrate a case study examining the relation between negative sentiment of Twitter posts related to English Defence League and the level of disorder during the organisation's related events.", "title": "" }, { "docid": "b492a0063354a81bd99ac3f81c3fb1ec", "text": "A Bangla automatic number plate recognition (ANPR) system using an artificial neural network for number plates inscribed in Bangla is presented in this paper. This system splits into three major parts: number plate detection, plate character segmentation and Bangla character recognition. In number plate detection there arise many problems such as vehicle motion, complex background, distance changes etc.; for this reason an edge analysis method is applied. As a Bangla number plate consists of two words and seven characters, detected number plates are segmented into individual words and characters by using horizontal and vertical projection analysis. After that a robust feature extraction method is employed to extract the information from each Bangla word and character which is non-sensitive to rotation, scaling and size variations. Finally the character recognition system takes this information as an input to recognize Bangla characters and words. The Bangla character recognition is implemented using a multilayer feed-forward network. According to the experimental results, the performance of the proposed system on different vehicle images is better in case of severe image conditions.", "title": "" }, { "docid": "bc85e28da375e2a38e06f0332a18aef0", "text": "Background: Statistical reviews of the theories of reasoned action (TRA) and planned behavior (TPB) applied to exercise are limited by methodological issues including insufficient sample size and data to examine some moderator associations. Methods: We conducted a meta-analytic review of 111 TRA/TPB and exercise studies and examined the influences of five moderator variables. Results: We found that: a) exercise was most strongly associated with intention and perceived behavioral control; b) intention was most strongly associated with attitude; and c) intention predicted exercise behavior, and attitude and perceived behavioral control predicted intention. Also, the time interval between intention to behavior; scale correspondence; subject age; operationalization of subjective norm, intention, and perceived behavioral control; and publication status moderated the size of the effect. Conclusions: The TRA/TPB effectively explained exercise intention and behavior and moderators of this relationship. Researchers and practitioners are more equipped to design effective interventions by understanding the TRA/TPB constructs.", "title": "" }, { "docid": "a027c9dd3b4522cdf09a2238bfa4c37e", "text": "Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on very large corpora, and use these pre-trained models in downstream tasks.
In this paper, we describe how we trained such high quality word representations for 157 languages. We used two sources of data to train these models: the free online encyclopedia Wikipedia and data from the common crawl project. We also introduce three new word analogy datasets to evaluate these word vectors, for French, Hindi and Polish. Finally, we evaluate our pre-trained word vectors on 10 languages for which evaluation datasets exists, showing very strong performance compared to previous models.", "title": "" }, { "docid": "5e64e36e76f4c0577ae3608b6e715a1f", "text": "Deep learning has recently become very popular on account of its incredible success in many complex datadriven applications, including image classification and speech recognition. The database community has worked on data-driven applications for many years, and therefore should be playing a lead role in supporting this new wave. However, databases and deep learning are different in terms of both techniques and applications. In this paper, we discuss research problems at the intersection of the two fields. In particular, we discuss possible improvements for deep learning systems from a database perspective, and analyze database applications that may benefit from deep learning techniques.", "title": "" }, { "docid": "7209596ad58da21211bfe0ceaaccc72b", "text": "Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT.", "title": "" }, { "docid": "ca1cc40633a97f557b2c97e135534e27", "text": "This paper presents a real-time long-range lane detection and tracking approach to meet the requirements of the high-speed intelligent vehicles running on highway roads. Based on a linear-parabolic two-lane highway road model and a novel strong lane marking feature named Lane Marking Segmentation, the maximal lane detection distance of this approach is up to 120 meters. Then the lane lines are selected and tracked by estimating the ego vehicle lateral offset with a Kalman filter. 
Experiment results with test dataset extracted from real traffic scenes on highway roads show that the approaches proposed in this paper can achieve a high detection rate with a low time cost.", "title": "" }, { "docid": "8dbbda0de8217ad2a52466a649653996", "text": "we introduce an algorithm that generates primes included in a given interval I = [a, b], the algorithm is an optimization to the segmented sieve of Eratosthenes, it finds primes up to N without any repetition of multiples of primes using the equation pn.pj + 2pn.pj.c = N with c ∈ Z, its time complexity is linear O(nloglog(n)− n(loglog(n))).", "title": "" }, { "docid": "47dc7c546c4f0eb2beb1b251ef9e4a81", "text": "In this paper we describe AMT, a tool for monitoring temporal properties of continuous signals. We first introduce STL/PSL, a specification formalism based on the industrial standard language PSL and the real-time temporal logic MITL, extended with constructs that allow describing behaviors of real-valued variables. The tool automatically builds property observers from an STL/PSL specification and checks, in an offline or incremental fashion, whether simulation traces satisfy the property. The AMT tool is validated through a Flash memory case-study.", "title": "" }, { "docid": "63bbab40aa12a3732c80addadb8d8f85", "text": "In their 1988 paper, Rich and Waters define the Programmer’s Apprentice as an “intelligent computer program that functions like a human support team,” providing guidance in requirements, design, and implementation of programs [21]. The Programmer’s Apprentice is differentiated from a simple code auto-completion agent in its ability to understand software design considerations and engage in helpful dialogue with the software engineer [23]. As we considered the problem of the programmer’s apprentice, we came to agree that interfaces are among the most important structural elements of code. In fact, in considering our own coding habits, we realized the most helpful assistant would be one that can comprehend design through the structure of an interface and assist in the completion of the code through a divide-and-conquer approach. In this work, we focus solely on the goal of implementation, proposing a novel framework for code generation that, given a sketch of helper function interfaces, suggests both use of primitive language operations and structuring of programs through decomposition.", "title": "" }, { "docid": "a4d315e5cff107329a603c19177259f1", "text": "Despite the fact that different studies have been performed using transcranial direct current stimulation (tDCS) in aphasia, so far, to what extent the stimulation of a cerebral region may affect the activity of anatomically connected regions remains unclear. The authors used a combination of transcranial magnetic stimulation (TMS) and electroencephalography (EEG) to explore brain areas' excitability modulation before and after active and sham tDCS. Six chronic aphasics underwent 3 weeks of language training coupled with tDCS over the right inferior frontal gyrus. To measure the changes induced by tDCS, TMS-EEG close to the area stimulated with tDCS were calculated. A significant improvement after tDCS stimulation was found which was accompanied by a modification of the EEG over the stimulated region.", "title": "" }, { "docid": "7894b8eae0ceacc92ef2103f0ea8e693", "text": "In this paper, different first and second derivative filters are investigated to find edge map after denoising a corrupted gray scale image. 
We have proposed a new derivative filter of first order and described a novel approach of edge finding with an aim to find better edge map in a restored gray scale image. Subjective method has been used by visually comparing the performance of the proposed derivative filter with other existing first and second order derivative filters. The root mean square error and root mean square of signal to noise ratio have been used for objective evaluation of the derivative filters. Finally, to validate the efficiency of the filtering schemes different algorithms are proposed and the simulation study has been carried out using MATLAB 5.0.", "title": "" }, { "docid": "43cd3b5ac6e2e2f240f4feb44be65b99", "text": "Executive Overview Toyota’s Production System (TPS) is based on “lean” principles including a focus on the customer, continual improvement and quality through waste reduction, and tightly integrated upstream and downstream processes as part of a lean value chain. Most manufacturing companies have adopted some type of “lean initiative,” and the lean movement recently has gone beyond the shop floor to white-collar offices and is even spreading to service industries. Unfortunately, most of these efforts represent limited, piecemeal approaches—quick fixes to reduce lead time and costs and to increase quality—that almost never create a true learning culture. We outline and illustrate the management principles of TPS that can be applied beyond manufacturing to any technical or service process. It is a true systems approach that effectively integrates people, processes, and technology—one that must be adopted as a continual, comprehensive, and coordinated effort for change and learning across the organization.", "title": "" }, { "docid": "5cf2c4239507b7d66cec3cf8fabf7f60", "text": "Government corruption is more prevalent in poor countries than in rich countries. This paper uses cross-industry heterogeneity in growth rates within Vietnam to test empirically whether growth leads to lower corruption. We find that it does. We begin by developing a model of government officials’ choice of how much bribe money to extract from firms that is based on the notion of inter-regional tax competition, and consider how officials’ choices change as the economy grows. We show that economic growth is predicted to decrease the rate of bribe extraction under plausible assumptions, with the benefit to officials of demanding a given share of revenue as bribes outweighed by the increased risk that firms will move elsewhere. This effect is dampened if firms are less mobile. Our empirical analysis uses survey data collected from over 13,000 Vietnamese firms between 2006 and 2010 and an instrumental variables strategy based on industry growth in other provinces. We find, first, that firm growth indeed causes a decrease in bribe extraction. Second, this pattern is particularly true for firms with strong land rights and those with operations in multiple provinces, consistent with these firms being more mobile. Our results suggest that as poor countries grow, corruption could subside “on its own,” and they demonstrate one type of positive feedback between economic growth and good institutions. ∗Contact information: Bai: jieb@mit.edu; Jayachandran: seema@northwestern.edu; Malesky: ejm5@duke.edu; Olken: bolken@mit.edu. 
We thank Lori Beaman, Raymond Fisman, Chang-Tai Hsieh, Supreet Kaur, Neil McCulloch, Andrei Shleifer, Matthew Stephenson, Eric Verhoogen, and Ekaterina Zhuravskaya for helpful comments.", "title": "" }, { "docid": "fb1a178c7c097fbbf0921dcef915dc55", "text": "AIMS\nThe management of open lower limb fractures in the United Kingdom has evolved over the last ten years with the introduction of major trauma networks (MTNs), the publication of standards of care and the wide acceptance of a combined orthopaedic and plastic surgical approach to management. The aims of this study were to report recent changes in outcome of open tibial fractures following the implementation of these changes.\n\n\nPATIENTS AND METHODS\nData on all patients with an open tibial fracture presenting to a major trauma centre between 2011 and 2012 were collected prospectively. The treatment and outcomes of the 65 Gustilo Anderson Grade III B tibial fractures were compared with historical data from the same unit.\n\n\nRESULTS\nThe volume of cases, the proportion of patients directly admitted and undergoing first debridement in a major trauma centre all increased. The rate of limb salvage was maintained at 94% and a successful limb reconstruction rate of 98.5% was achieved. The rate of deep bone infection improved to 1.6% (one patient) in the follow-up period.\n\n\nCONCLUSION\nThe reasons for these improvements are multifactorial, but the major trauma network facilitating early presentation to the major trauma centre, senior orthopaedic and plastic surgical involvement at every stage and proactive microbiological management, may be important factors.\n\n\nTAKE HOME MESSAGE\nThis study demonstrates that a systemised trauma network combined with evidence based practice can lead to improvements in patient care.", "title": "" }, { "docid": "7d868685692667f5fa85a7d0e957ae8e", "text": "Non-negative Tensor Factorization (NTF) has become a prominent tool for analyzing high dimensional multi-way structured data. In this paper we set out to analyze gene expression across brain regions in multiple subjects based on data from the Allen Human Brain Atlas [1] with more than 40 % data missing in our problem. Our analysis is based on the non-negativity constrained Canonical Polyadic (CP) decomposition where we handle the missing data using marginalization considering three prominent alternating least squares procedures; multiplicative updates, column-wise, and row-wise updating of the component matrices. We examine three gene expression prediction scenarios based on data missing at random, whole genes missing and whole areas missing within a subject. We find that the column-wise updating approach also known as HALS performs the most efficient when fitting the model. We further observe that the non-negativity constrained CP model is able to predict gene expressions better than predicting by the subject average when data is missing at random. When whole genes and whole areas are missing it is in general better to predict by subject averages. However, we find that when whole genes are missing from all subjects the model based predictions are useful. When analyzing the structure of the components derived for one of the best predicting model orders the components identified in general constitute localized regions of the brain. Non-negative tensor factorization based on marginalization thus forms a promising framework for imputing missing values and characterizing gene expression in the human brain. 
However, care also has to be taken in particular when predicting the genetic expression levels at a whole region of the brain missing as our analysis indicates that this requires a substantial amount of subjects with data for this region in order for the model predictions to be reliable.", "title": "" }, { "docid": "c68c5df29702e797b758474f4e8b137e", "text": "Abstract—A miniaturized printed log-periodic fractal dipole antenna is proposed. Tree fractal structure is introduced in an antenna design and evolves the traditional Euclidean log-periodic dipole array into the log-periodic second-iteration tree-dipole array (LPT2DA) for the first time. Main parameters and characteristics of the proposed antenna are discussed. A fabricated proof-of-concept prototype of the proposed antenna is etched on a FR4 substrate with a relative permittivity of 4.4 and volume of 490 mm × 245 mm × 1.5 mm. The impedance bandwidth (measured VSWR < 2) of the fabricated antenna with approximate 40% reduction of traditional log-periodic dipole antenna is from 0.37 to 3.55GHz with a ratio of about 9.59 : 1. Both numerical and experimental results show that the proposed antenna has stable directional radiation patterns and apparently miniaturized effect, which are suitable for various ultra-wideband applications.", "title": "" } ]
scidocsrr
57f985f2edec4455a92a4c7b96c7da63
Linked Open Vocabularies (LOV): A gateway to reusable semantic vocabularies on the Web
[ { "docid": "93efc06a282a12fb65038381cf390e19", "text": "Linked Open Data (LOD) comprises an unprecedented volume of structured data on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowdsourced or extracted data of often relatively low quality. We present a methodology for test-driven quality assessment of Linked Data, which is inspired by test-driven software development. We argue that vocabularies, ontologies and knowledge bases should be accompanied by a number of test cases, which help to ensure a basic level of quality. We present a methodology for assessing the quality of linked data resources, based on a formalization of bad smells and data quality problems. Our formalization employs SPARQL query templates, which are instantiated into concrete quality test case queries. Based on an extensive survey, we compile a comprehensive library of data quality test case patterns. We perform automatic test case instantiation based on schema constraints or semi-automatically enriched schemata and allow the user to generate specific test case instantiations that are applicable to a schema or dataset. We provide an extensive evaluation of five LOD datasets, manual test case instantiation for five schemas and automatic test case instantiations for all available schemata registered with Linked Open Vocabularies (LOV). One of the main advantages of our approach is that domain specific semantics can be encoded in the data quality test cases, thus being able to discover data quality problems beyond conventional quality heuristics.", "title": "" }, { "docid": "d931f6f9960e8688c2339a27148efe74", "text": "Most knowledge on the Web is encoded as natural language text, which is convenient for human users but very difficult for software agents to understand. Even with increased use of XML-encoded information, software agents still need to process the tags and literal symbols using application dependent semantics. The Semantic Web offers an approach in which knowledge can be published by and shared among agents using symbols with a well defined, machine-interpretable semantics. The Semantic Web is a “web of data” in that (i) both ontologies and instance data are published in a distributed fashion; (ii) symbols are either ‘literals’ or universally addressable ‘resources’ (URI references) each of which comes with unique semantics; and (iii) information is semi-structured. The Friend-of-a-Friend (FOAF) project (http://www.foafproject.org/) is a good application of the Semantic Web in which users publish their personal profiles by instantiating the foaf:Personclass and adding various properties drawn from any number of ontologies. The Semantic Web’s distributed nature raises significant data access problems – how can an agent discover, index, search and navigate knowledge on the Semantic Web? Swoogle (Dinget al. 2004) was developed to facilitate webscale semantic web data access by providing these services to both human and software agents. It focuses on two levels of knowledge granularity: URI based semantic web vocabulary andsemantic web documents (SWDs), i.e., RDF and OWL documents encoded in XML, NTriples or N3. Figure 1 shows Swoogle’s architecture. The discovery component automatically discovers and revisits SWDs using a set of integrated web crawlers. 
The digest component computes metadata for SWDs and semantic web terms (SWTs) as well as identifies relations among them, e.g., “an SWD instantiates an SWT class”, and “an SWT class is the domain of an SWT property”. The analysis component uses cached SWDs and their metadata to derive analytical reports, such as classifying ontologies among SWDs and ranking SWDs by their importance. The service component sup-", "title": "" } ]
[ { "docid": "2f5ccd63b8f23300c090cb00b6bbe045", "text": "Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a \"variable,\" the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.", "title": "" }, { "docid": "6b878f3084bd74d963f25b3fd87d0a34", "text": "Cooperative behavior planning for automated vehicles is getting more and more attention in the research community. This paper introduces two dimensions to structure cooperative driving tasks. The authors suggest to distinguish driving tasks by the used communication channels and by the hierarchical level of cooperative skills and abilities. In this manner, this paper presents the cooperative behavior skills of \"Jack\", our automated vehicle driving from Stanford to Las Vegas in January 2015.", "title": "" }, { "docid": "0d0eb6ed5dff220bc46ffbf87f90ee59", "text": "Objectives. The aim of this review was to investigate whether alternating hot–cold water treatment is a legitimate training tool for enhancing athlete recovery. A number of mechanisms are discussed to justify its merits and future research directions are reported. Alternating hot–cold water treatment has been used in the clinical setting to assist in acute sporting injuries and rehabilitation purposes. However, there is overwhelming anecdotal evidence for it’s inclusion as a method for post exercise recovery. Many coaches, athletes and trainers are using alternating hot–cold water treatment as a means for post exercise recovery. Design. A literature search was performed using SportDiscus, Medline and Web of Science using the key words recovery, muscle fatigue, cryotherapy, thermotherapy, hydrotherapy, contrast water immersion and training. Results. 
The physiologic effects of hot–cold water contrast baths for injury treatment have been well documented, but its physiological rationale for enhancing recovery is less known. Most experimental evidence suggests that hot–cold water immersion helps to reduce injury in the acute stages of injury, through vasodilation and vasoconstriction thereby stimulating blood flow thus reducing swelling. This shunting action of the blood caused by vasodilation and vasoconstriction may be one of the mechanisms to removing metabolites, repairing the exercised muscle and slowing the metabolic process down. Conclusion. To date there are very few studies that have focussed on the effectiveness of hot–cold water immersion for post exercise treatment. More research is needed before conclusions can be drawn on whether alternating hot–cold water immersion improves recuperation and influences the physiological changes that characterises post exercise recovery. q 2003 Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1eba4ab4cb228a476987a5d1b32dda6c", "text": "Optimistic estimates suggest that only 30-70% of waste generated in cities of developing countries is collected for disposal. As a result, uncollected waste is often disposed of into open dumps, along the streets or into water bodies. Quite often, this practice induces environmental degradation and public health risks. Notwithstanding, such practices also make waste materials readily available for itinerant waste pickers. These 'scavengers' as they are called, therefore perceive waste as a resource, for income generation. Literature suggests that Informal Sector Recycling (ISR) activity can bring other benefits such as, economic growth, litter control and resources conservation. This paper critically reviews trends in ISR activities in selected developing and transition countries. ISR often survives in very hostile social and physical environments largely because of negative Government and public attitude. Rather than being stigmatised, the sector should be recognised as an important element for achievement of sustainable waste management in developing countries. One solution to this problem could be the integration of ISR into the formal waste management system. To achieve ISR integration, this paper highlights six crucial aspects from literature: social acceptance, political will, mobilisation of cooperatives, partnerships with private enterprises, management and technical skills, as well as legal protection measures. It is important to note that not every country will have the wherewithal to achieve social inclusion and so the level of integration must be 'flexible'. In addition, the structure of the ISR should not be based on a 'universal' model but should instead take into account local contexts and conditions.", "title": "" }, { "docid": "9aab4a607de019226e9465981b82f9b8", "text": "Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based on either color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. 
We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that peoples' abilities to perceive color differences varies significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.", "title": "" }, { "docid": "064505e942f5f8fd5f7e2db5359c7fe8", "text": "THE hopping of kangaroos is reminiscent of a bouncing ball or the action of a pogo stick. This suggests a significant storage and recovery of energy in elastic elements. One might surmise that the kangaroo's first hop would require a large amount of energy whereas subsequent hops could rely extensively on elastic rebound. If this were the case, then the kangaroo's unusual saltatory mode of locomotion should be an energetically inexpensive way to move.", "title": "" }, { "docid": "d7de2d835ce5a9f973a41b6f70a41512", "text": "This study addresses generating counterfactual explanations with multimodal information. Our goal is not only to classify a video into a specific category, but also to provide explanations on why it is not predicted as part of a specific class with a combination of visual-linguistic information. Requirements that the expected output should satisfy are referred to as counterfactuality in this paper: (1) Compatibility of visual-linguistic explanations, and (2) Positiveness/negativeness for the specific positive/negative class. Exploiting a spatio-temporal region (tube) and an attribute as visual and linguistic explanations respectively, the explanation model is trained to predict the counterfactuality for possible combinations of multimodal information in a posthoc manner. The optimization problem, which appears during the training/inference process, can be efficiently solved by inserting a novel neural network layer, namely the maximum subpath layer. We demonstrated the effectiveness of this method by comparison with a baseline of the actionrecognition datasets extended for this task. Moreover, we provide information-theoretical insight into the proposed method.", "title": "" }, { "docid": "0da5045988b5064544870e1ff0f7ba44", "text": "Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of hidden nodes, including input weights and biases, are randomly assigned and need not be tuned while the output weights can be analytically determined by the simple generalized inverse operation. The only parameter needed to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides extremely faster learning speed, better generalization performance and with least human intervention. This paper firstly introduces a brief review of ELM, describing the principle and algorithm of ELM. Then, we put emphasis on the improved methods or the typical variants of ELM, especially on incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarized the applications of ELM on classification, regression, function approximation, pattern recognition, forecasting and diagnosis, and so on. 
In the last, the paper discussed several open issues of ELM, which may be worthy of exploring in the future.", "title": "" }, { "docid": "e48c260c2a0ef52c1aff8d11a3dc071e", "text": "Current transformer (CT) saturation can cause protective relay mal-operation or even prevent tripping. The wave shape of the secondary current is severely distorted as the CT is forced into deep saturation when the residual flux in the core adds to the flux change caused by faults. In this paper, a morphological lifting scheme is proposed to extract features contained in the waveform of the signal. The detection of the CT saturation is accurately achieved and the points of the inflection, where the saturation begins and ends, are found with the scheme used. This paper also presents a compensation algorithm, based upon the detection results, to reconstruct healthy secondary currents. The proposed morphological lifting scheme and compensation algorithm are demonstrated on a sample power system. The simulation results clearly indicate that they can successfully detect and compensate the distorted secondary current of a saturated CT with residual flux.", "title": "" }, { "docid": "63737a22e2591a91884496ea7a1185b1", "text": "Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPrime, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPrime benchmarks. They retain the original applications' performance characteristics, in particular their relative performance across platforms. Also, the result benchmarks, already released online, are much more compact and easy-to-port compared to the original applications.", "title": "" }, { "docid": "55e587291229b8c9889a95f99d68d88b", "text": "Power system loads are one of the crucial elements of modern power systems and, as such, must be properly modelled in stability studies. However, the static and dynamic characteristics of a load are commonly unknown, extremely nonlinear, and are usually time varying. Consequently, a measurement-based approach for determining the load characteristics would offer a significant advantage since it could update the parameters of load models directly from the available system measurements. For this purpose and in order to accurately determine load model parameters, a suitable parameter estimation method must be applied. The conventional approach to this problem favors the use of standard nonlinear estimators or artificial intelligence (AI)-based methods. In this paper, a new solution for determining the unknown load model parameters is proposed-an improved particle swarm optimization (IPSO) method. 
The proposed method is an AI-type technique similar to the commonly used genetic algorithms (GAs) and is shown to provide a promising alternative. This paper presents a performance comparison of IPSO and GA using computer simulations and measured data obtained from realistic laboratory experiments.", "title": "" }, { "docid": "9ee83c40df6b97eaf502628af1434376", "text": "Many object detection systems are constrained by the time required to convolve a target image with a bank of filters that code for different aspects of an object's appearance, such as the presence of component parts. We exploit locality-sensitive hashing to replace the dot-product kernel operator in the convolution with a fixed number of hash-table probes that effectively sample all of the filter responses in time independent of the size of the filter bank. To show the effectiveness of the technique, we apply it to evaluate 100,000 deformable-part models requiring over a million (part) filters on multiple scales of a target image in less than 20 seconds using a single multi-core processor with 20GB of RAM. This represents a speed-up of approximately 20,000 times - four orders of magnitude - when compared with performing the convolutions explicitly on the same hardware. While mean average precision over the full set of 100,000 object classes is around 0.16 due in large part to the challenges in gathering training data and collecting ground truth for so many classes, we achieve a mAP of at least 0.20 on a third of the classes and 0.30 or better on about 20% of the classes.", "title": "" }, { "docid": "07b362c7f6e941513cfbafce1ba87db1", "text": "ResearchGate is increasingly used by scholars to upload the full-text of their articles and make them freely available for everyone. This study aims to investigate the extent to which ResearchGate members as authors of journal articles comply with publishers’ copyright policies when they self-archive full-text of their articles on ResearchGate. A random sample of 500 English journal articles available as full-text on ResearchGate were investigated. 108 articles (21.6%) were open access (OA) published in OA journals or hybrid journals. Of the remaining 392 articles, 61 (15.6%) were preprint, 24 (6.1%) were post-print and 307 (78.3%) were published (publisher) PDF. The key finding was that 201 (51.3%) out of 392 non-OA articles infringed the copyright and were non-compliant with publishers’ policy. While 88.3% of journals allowed some form of self-archiving (SHERPA/RoMEO green, blue or yellow journals), the majority of non-compliant cases (97.5%) occurred when authors self-archived publishers’ PDF files (final published version). This indicates that authors infringe copyright most of the time not because they are not allowed to self-archive, but because they use the wrong version, which might imply their lack of understanding of copyright policies and/or complexity and diversity of policies.", "title": "" }, { "docid": "9d3ca4966c26c6691398157a22531a1d", "text": "Bipedal locomotion skills are challenging to develop. Control strategies often use local linearization of the dynamics in conjunction with reduced-order abstractions to yield tractable solutions. In these model-based control strategies, the controller is often not fully aware of many details, including torque limits, joint limits, and other non-linearities that are necessarily excluded from the control computations for simplicity. 
Deep reinforcement learning (DRL) offers a promising model-free approach for controlling bipedal locomotion which can more fully exploit the dynamics. However, current results in the machine learning literature are often based on ad-hoc simulation models that are not based on corresponding hardware. Thus it remains unclear how well DRL will succeed on realizable bipedal robots. In this paper, we demonstrate the effectiveness of DRL using a realistic model of Cassie, a bipedal robot. By formulating a feedback control problem as finding the optimal policy for a Markov Decision Process, we are able to learn robust walking controllers that imitate a reference motion with DRL. Controllers for different walking speeds are learned by imitating simple time-scaled versions of the original reference motion. Controller robustness is demonstrated through several challenging tests, including sensory delay, walking blindly on irregular terrain and unexpected pushes at the pelvis. We also show we can interpolate between individual policies and that robustness can be improved with an interpolated policy.", "title": "" }, { "docid": "360595010d855ee6b1db3b25687fe428", "text": "Multiple-input multiple-output (MIMO) is an existing technique that can significantly increase throughput of the system by employing multiple antennas at the transmitter and the receiver. Realizing maximum benefit from this technique requires computationally intensive detectors which poses significant challenges to receiver design. Furthermore, a flexible detector or multiple detectors are needed to handle different configurations. Graphical Processor Unit (GPU), a highly parallel commodity programmable co-processor, can deliver extremely high computation throughput and is well suited for signal processing applications. However, careful architecture aware design is needed to leverage performance offered by GPU. We show we can achieve good performance while maintaining flexibility by employing an optimized trellis-based MIMO detector on GPU.", "title": "" }, { "docid": "3d911d6eeefefd16f898200da0e1a3ef", "text": "We introduce Reality-based User Interface System (RUIS), a virtual reality (VR) toolkit aimed for students and hobbyists, which we have used in an annually organized VR course for the past four years. RUIS toolkit provides 3D user interface building blocks for creating immersive VR applications with spatial interaction and stereo 3D graphics, while supporting affordable VR peripherals like Kinect, PlayStation Move, Razer Hydra, and Oculus Rift. We describe a novel spatial interaction scheme that combines freeform, full-body interaction with traditional video game locomotion, which can be easily implemented with RUIS. We also discuss the specific challenges associated with developing VR applications, and how they relate to the design principles behind RUIS. Finally, we validate our toolkit by comparing development difficulties experienced by users of different software toolkits, and by presenting several VR applications created with RUIS, demonstrating a variety of spatial user interfaces that it can produce.", "title": "" }, { "docid": "fe2b4a3d33799f868015c6a7319aad64", "text": "The development of Open Source Software (OSS) is fundamentally different from the proprietary software. In the OSS development scenario a single developer or group of developers writes the source code for the first version of the software and make it freely available over the internet. 
Then other developers are invited to contribute to the existing code for its next release. Making the source code of the software available on the internet allows developers around the world to contribute code, add new functionality, improvement of the existing source code and submitting bug fixes to the current release. In such a software development scenario the maintenance of the open source software is a perpetual task. Developing an OSS system implies a series of frequent maintenance efforts for debugging existing functionality and adding new functionality to the software system. The process of making the modifications to software systems after their first release is known as maintenance process. The term maintainability is closely related to the software maintenance because maintainability means the easiness to perform maintenance of the system. The most widely used software metric to quantify the maintainability is known as Maintainability Index (MI). In this study the MI of four most popular OSS namely Apache, Mozilla Firefox, MySql and FileZilla for fifty successive releases was empirically investigated. The MI in terms of software metrics namely Lines of Code (LOC), Cyclomatic Complexity (CC), and Halstead Volume (V) was computed for all the fifty successive versions of four OSS. The software metrics were calculated using Resource Standard Metrics (RSM) tool and Crystal Flow tool. It was observed from the results that the MI value was the highest in case of Mozilla Firefox and was the lowest in the case of Apache OSS.", "title": "" }, { "docid": "6bdf0850725f091fea6bcdf7961e27d0", "text": "The aim of this review is to document the advantages of exclusive breastfeeding along with concerns which may hinder the practice of breastfeeding and focuses on the appropriateness of complementary feeding and feeding difficulties which infants encounter. Breastfeeding, as recommended by the World Health Organisation, is the most cost effective way for reducing childhood morbidity such as obesity, hypertension and gastroenteritis as well as mortality. There are several factors that either promote or act as barriers to good infant nutrition. Factors which influence breastfeeding practice in terms of initiation, exclusivity and duration are namely breast engorgement, sore nipples, milk insufficiency and availability of various infant formulas. On the other hand, introduction of complementary foods, also known as weaning, is done around 4 to 6 months and mothers usually should start with home-made nutritious food. Difficulties encountered during the weaning process are often refusal to eat followed by vomiting, colic, allergic reactions and diarrhoea. key words: Exclusive breastfeeding, Weaning, Complementary feeding, Feeding difficulties.", "title": "" }, { "docid": "2af519e703849aede1c022567fce8ca0", "text": "Evaluation is a central digital library practice. It provides important data for managing digital libraries and informing strategic decision-making. Digital library evaluation and management are organizational as well as technical practices. What evaluation models can account for these organizational factors, in practice as well as in theory? 
To address these questions, this paper integrates two models, one from the organizational literature (Porter’s value chain), and one from the evaluation literature (evaluation logic models), into a generic, flexible and extensible evaluation model that supports the goal-oriented evaluation and management of digital libraries in specific sociotechnical contexts. A case study is provided.", "title": "" }, { "docid": "be7f0079a3462e9cf81d44002b8a340e", "text": "Long-term participation in creative activities has benefits for middle-aged and older people that may improve their adaptation to later life. We first investigated the factor structure of the Creative Benefits Scale and then used it to construct a model to help explain the connection between generativity and life satisfaction in adults who participated in creative hobbies. Participants included 546 adults between the ages of 40 and 88 (Mean = 58.30 years) who completed measures of life satisfaction, generativity, and the Creative Benefits Scale with its factors of Identity, Calming, Spirituality, and Recognition. Structural equation modeling was used to examine the connection of age with life satisfaction in older adults and to explore the effects of creativity on this relation. The proposed model of life satisfaction, incorporating age, creativity, and generativity, fit the data well, indicating that creativity may help explain the link between the generativity and life satisfaction.", "title": "" } ]
scidocsrr
3e2696a4ada0f4504bd61b5cf2a83fbc
Micro-opinion Sentiment Intensity Analysis and Summarization in Online Videos
[ { "docid": "6081f8b819133d40522a4698d4212dfc", "text": "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.", "title": "" } ]
[ { "docid": "4f025d54b632ad65d4219225ac0f06cc", "text": "Real-world AI systems have been recently deployed which can automatically analyze the plan and tactics of tennis players. As the game-state is updated regularly at short intervals (i.e. point-level), a library of successful and unsuccessful plans of a player can be learnt over time. Given the relative strengths and weaknesses of a player’s plans, a set of proven plans or tactics from the library that characterize a player can be identified. For low-scoring, continuous team sports like soccer, such analysis for multi-agent teams does not exist as the game is not segmented into “discretized” plays (i.e. plans), making it difficult to obtain a library that characterizes a team’s behavior. Additionally, as player tracking data is costly and difficult to obtain, we only have partial team tracings in the form of ball actions which makes this problem even more difficult. In this paper, we propose a method to overcome these issues by representing team behavior via play-segments, which are spatio-temporal descriptions of ball movement over fixed windows of time. Using these representations we can characterize team behavior from entropy maps, which give a measure of predictability of team behaviors across the field. We show the efficacy and applicability of our method on the 2010-2011 English Premier League soccer data.", "title": "" }, { "docid": "3dcb93232121be1ff8a2d96ecb25bbdd", "text": "We describe the approach that won the preliminary phase of the German traffic sign recognition benchmark with a better-than-human recognition rate of 98.98%.We obtain an even better recognition rate of 99.15% by further training the nets. Our fast, fully parameterizable GPU implementation of a Convolutional Neural Network does not require careful design of pre-wired feature extractors, which are rather learned in a supervised way. A CNN/MLP committee further boosts recognition performance.", "title": "" }, { "docid": "fdfcab6236d74bcc882fde104f457d83", "text": "In this study, direct and indirect effects of self-esteem, daily internet use and social media addiction to depression levels of adolescents have been investigated by testing a model. This descriptive study was conducted with 1130 students aged between 12 and 18 who are enrolled at different schools in southern region of Aegean. In order to collect data, “Children's Depression Inventory”, “Rosenberg Self-esteem Scale” and “Social Media Addiction Scale” have been used. In order to test the hypotheses Pearson's correlation and structural equation modeling were performed. The findings revealed that self-esteem and social media addiction predict %20 of the daily internet use. Furthermore, while depression was associated with self-esteem and daily internet use directly, social media addiction was affecting depression indirectly. Tested model was able to predict %28 of the depression among adolescents.", "title": "" }, { "docid": "53a1d344a6e38dd790e58c6952e51cdb", "text": "The thermal conductivities of individual single crystalline intrinsic Si nanowires with diameters of 22, 37, 56, and 115 nm were measured using a microfabricated suspended device over a temperature range of 20–320 K. Although the nanowires had well-defined crystalline order, the thermal conductivity observed was more than two orders of magnitude lower than the bulk value. 
The strong diameter dependence of thermal conductivity in nanowires was ascribed to the increased phonon-boundary scattering and possible phonon spectrum modification. © 2003 American Institute of Physics. [DOI: 10.1063/1.1616981]", "title": "" }, { "docid": "188ab32548b91fd1bf1edf34ff3d39d9", "text": "With the marvelous development of wireless techniques and ubiquitous deployment of wireless systems indoors, myriad indoor location-based services (ILBSs) have permeated into numerous aspects of modern life. The most fundamental functionality is to pinpoint the location of the target via wireless devices. According to how wireless devices interact with the target, wireless indoor localization schemes roughly fall into two categories: device based and device free. In device-based localization, a wireless device (e.g., a smartphone) is attached to the target and computes its location through cooperation with other deployed wireless devices. In device-free localization, the target carries no wireless devices, while the wireless infrastructure deployed in the environment determines the target’s location by analyzing its impact on wireless signals.\n This article is intended to offer a comprehensive state-of-the-art survey on wireless indoor localization from the device perspective. In this survey, we review the recent advances in both modes by elaborating on the underlying wireless modalities, basic localization principles, and data fusion techniques, with special emphasis on emerging trends in (1) leveraging smartphones to integrate wireless and sensor capabilities and extend to the social context for device-based localization, and (2) extracting specific wireless features to trigger novel human-centric device-free localization. We comprehensively compare each scheme in terms of accuracy, cost, scalability, and energy efficiency. Furthermore, we take a first look at intrinsic technical challenges in both categories and identify several open research issues associated with these new challenges.", "title": "" }, { "docid": "6cdb73baa43c26ce0184fdfb270b124f", "text": "Most video surveillance suspect investigation systems rely on the videos taken in different camera views. Actually, besides the videos, in the investigation process, investigators also manually label some marks, which, albeit incomplete, can be quite accurate and helpful in identifying persons. This paper studies the problem of Person Re-identification with Incomplete Marks (PRIM), aiming at ranking the persons in the gallery according to both the videos and incomplete marks. This problem is solved by a multi-step fusion algorithm, which consists of three key steps: (i) The early fusing step exploits both visual features and marked attributes to predict a complete and precise attribute vector. (ii) Based on the statistical attribute dominance and saliency phenomena, a dominance-saliency matching model is suggested for measuring the distance between attribute vectors. (iii) The gallery is ranked separately by using visual features and attribute vectors, and the overall ranking list is the result of a late fusion. Experiments conducted on VIPeR dataset have validated the effectiveness of the proposed method in all the three key steps. 
The results also show that through introducing marks, the retrieval accuracy is significantly improved.", "title": "" }, { "docid": "1e42000ed8a108c8745403102613373b", "text": "Knowledge graph embedding aims to represent entities and relations in a large-scale knowledge graph as elements in a continuous vector space. Existing methods, e.g., TransE and TransH, learn embedding representation by defining a global margin-based loss function over the data. However, the optimal loss function is determined during experiments whose parameters are examined among a closed set of candidates. Moreover, embeddings over two knowledge graphs with different entities and relations share the same set of candidate loss functions, ignoring the locality of both graphs. This leads to the limited performance of embedding related applications. In this paper, we propose a locally adaptive translation method for knowledge graph embedding, called TransA, to find the optimal loss function by adaptively determining its margin over different knowledge graphs. Experiments on two benchmark data sets demonstrate the superiority of the proposed method, as compared to the-state-of-the-art ones.", "title": "" }, { "docid": "c1f12b1718c3844efe6e87563abcf6e6", "text": "Conversational interfaces-computer interfaces that use text or voice for human-computer interaction-are one of many interaction modalities in interactive systems. Their use has expanded with the growth and range of products that are both digital and physical in nature. But users' expectations of conversational experiences are not being met. In addition, expectations may not be met in different ways across people. This paper aims to provoke dialogue among design researchers and practitioners regarding the design of an interaction modality that is commonly found in human-to-human interaction. It focuses on the design of conversational interfaces such as virtual assistants to illustrate the quandary. Finally, the paper proposes a phenomenon for the issue-a type of dissonance-and introduces tactics to reduce dissonance. It sheds light on potential approaches to design as well as complications that may occur.", "title": "" }, { "docid": "919ef49d1bbd4d76fc7c3ca01969c811", "text": "While most research papers on computer architectures include some performance measurements, these performance numbers tend to be distrusted. Up to the point that, after so many research articles on data cache architectures, for instance, few researchers have a clear view of what are the best data cache mechanisms. To illustrate the usefulness of a fair quantitative comparison, we have picked a target architecture component for which lots of optimizations have been proposed (data caches), and we have implemented most of the performance-oriented hardware data cache optimizations published in top conferences in the past 4 years. Beyond the comparison of data cache ideas, our goals are twofold: (1) to clearly and quantitatively evaluate the effect of methodology shortcomings, such as model precision, benchmark selection, trace selection..., on assessing and comparing research ideas, and to outline how strong is the methodology effect in many cases, (2) to outline that the lack of interoperable simulators and not disclosing simulators at publication time make it difficult if not impossible to fairly assess the benefit of research ideas. 
This study is part of a broader effort, called MicroLib, an open library of modular simulators aimed at promoting the disclosure and sharing of simulator models.", "title": "" }, { "docid": "966d650d8d186715dd1ee08effedce92", "text": "Over the past few years, various tasks involving videos such as classification, description, summarization and question answering have received a lot of attention. Current models for these tasks compute an encoding of the video by treating it as a sequence of images and going over every image in the sequence, which becomes computationally expensive for longer videos. In this paper, we focus on the task of video classification and aim to reduce the computational cost by using the idea of distillation. Specifically, we propose a Teacher-Student network wherein the teacher looks at all the frames in the video but the student looks at only a small fraction of the frames in the video. The idea is to then train the student to minimize (i) the difference between the final representation computed by the student and the teacher and/or (ii) the difference between the distributions predicted by the teacher and the student. This smaller student network which involves fewer computations but still learns to mimic the teacher can then be employed at inference time for video classification. We experiment with the YouTube-8M dataset and show that the proposed student network can reduce the inference time by upto 30% with a negligent drop in the performance.", "title": "" }, { "docid": "f320e7f092040e72de062dc8203bbcfb", "text": "This research provides a security assessment of the Android framework-Google's software stack for mobile devices. The authors identify high-risk threats to the framework and suggest several security solutions for mitigating them.", "title": "" }, { "docid": "cf48a139219a096a5e75e5462ed492d1", "text": "Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Differentiable games often proceed by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations like generative adversarial networks (GANs) and actorcritic systems. However, compared to singleobjective optimization, game dynamics are more complex and less understood. In this paper, we analyze gradient-based methods with momentum on simple games. We prove that alternating updates are more stable than simultaneous updates. Next, we show both theoretically and empirically that alternating gradient updates with a negative momentum term achieves convergence in a difficult toy adversarial problem, but also on the notoriously difficult to train saturating GANs.", "title": "" }, { "docid": "ed444fdb732c1a85113de5b00375f1c2", "text": "2590 Suppose one person, call him Sender, wishes to persuade another, call her Receiver, to change her action. If Receiver is a rational Bayesian, can Sender persuade her to take an action he would prefer over the action she was originally going to take? If Receiver understands that Sender chose what information to convey with the intent of manipulating her action for his own benefit, can Sender still gain from persuasion? If so, what is the optimal way to persuade? These questions are of substantial economic importance. As Donald McCloskey and Arjo Klamer (1995) emphasize, attempts at persuasion command a sizable share of our resources. 
Persuasion, as we will define it below, plays an important role in advertising, courts, lobbying, financial disclosure, and political campaigns, among many other economic activities. Consider the example of a prosecutor trying to convince a judge that a defendant is guilty. When the defendant is indeed guilty, revealing the facts of the case will tend to help the prosecutor’s case. When the defendant is innocent, revealing facts will tend to hurt the prosecutor’s case. Can the prosecutor structure his arguments, selection of evidence, etc. so as to increase the probability of conviction by a rational judge on average? Perhaps surprisingly, the answer to this question is yes. Bayes’s Law restricts the expectation of posterior beliefs but puts no other constraints on Bayesian Persuasion", "title": "" }, { "docid": "e9eb2f80d41f177cc43933c19fefc6ca", "text": "A novel dual-band microstrip monopolar patch antenna with zeroth-order resonance is proposed and analyzed in this letter. The antenna is a combination of circular patch antenna with center-fed and circularly periodic mushroom units. With the zeroth-order resonant (ZOR) mode and TM02 mode, the proposed antenna can create horizontal magnetic loop currents on the patch, which lead to a low profile of 0.02λ at the low frequency band. The radiation mechanism of the proposed antenna is discussed. The antenna is fabricated on a double-layered printed circuit board (PCB) and center-fed by a 50- Ω SMA connector. Simulated and measured results show that the proposed antenna produces a stable monopole-like radiation pattern in two working frequency bands. The corresponding impedance bandwidths and gains are 0.75% and 5.1 dBi for the low frequency band and 20% and a range of 5.8-8 dBi for the high frequency band.", "title": "" }, { "docid": "c589dd4a3da018fbc62d69e2d7f56e88", "text": "More than 520 soil samples were surveyed for species of the mycoparasitic zygomycete genus Syncephalis using a culture-based approach. These fungi are relatively common in soil using the optimal conditions for growing both the host and parasite. Five species obtained in dual culture are unknown to science and are described here: (i) S. digitata with sporangiophores short, merosporangia separate at the apices, simple, 3-5 spored; (ii) S. floridana, which forms galls in the host and has sporangiophores up to 170 µm long with unbranched merosporangia that contain 2-4 spores; (iii) S. pseudoplumigaleta, with an abrupt apical bend in the sporophore; (iv) S. pyriformis with fertile vesicles that are long-pyriform; and (v) S. unispora with unispored merosporangia. To facilitate future molecular comparisons between species of Syncephalis and to allow identification of these fungi from environmental sampling datasets, we used Syncephalis-specific PCR primers to generate internal transcribed spacer (ITS) sequences for all five new species.", "title": "" }, { "docid": "3d2e82a0353d0b2803a579c413403338", "text": "In 1994, nutritional facts panels became mandatory for processed foods to improve consumer access to nutritional information and to promote healthy food choices. Recent applied work is reviewed here in terms of how consumers value and respond to nutritional labels. We first summarize the health and nutritional links found in the literature and frame this discussion in terms of the obesity policy debate. 
Second, we discuss several approaches that have been used to empirically investigate consumer responses to nutritional labels: (a) surveys, (b) nonexperimental approaches utilizing revealed preferences, and (c) experiment-based approaches. We conclude with a discussion and suggest avenues of future research. INTRODUCTION How the provision of nutritional information affects consumers’ food choices and whether consumers value nutritional information are particularly pertinent questions in a country where obesity is pervasive. Firms typically have more information about the quality of their products than do consumers, creating a situation of asymmetric information. It is prohibitively costly for most consumers to acquire nutritional information independently of firms. Firms can use this information to signal their quality and to receive quality premiums. However, firms that sell less nutritious products prefer to omit nutritional information. In this market setting, firms may not have an incentive to fully reveal their product quality, may try to highlight certain attributes in their advertising claims while shrouding others (Gabaix & Laibson 2006), or may provide information in a less salient fashion (Chetty et al. 2007). Mandatory nutritional labeling can fill this void of information provision by correcting asymmetric information and transforming an experience-good or a credence-good characteristic into search-good characteristics (Caswell & Mojduszka 1996). Golan et al. (2000) argue that the effectiveness of food labeling depends on firms’ incentives for information provision, government information requirements, and the role of third-party entities in standardizing and certifying the accuracy of the information. Yet nutritional information is valuable only if consumers use it in some fashion. Early advances in consumer choice theory, such as market goods possessing desirable characteristics (Lancaster 1966) or market goods used in conjunction with time to produce desirable commodities (Becker 1965), set the theoretical foundation for studying how market prices, household characteristics, incomes, nutrient content, and taste considerations interact with and influence consumer choice. LaFrance (1983) develops a theoretical framework and estimates the marginal value of nutrient versus taste parameters in an analytical approach that imposes a sufficient degree of restrictions to generality to be empirically feasible. Real or perceived tradeoffs between nutritional and taste or pleasure considerations imply that consumers will not necessarily make healthier choices. Reduced search costs mean that consumers can more easily make choices that maximize their utility. Foster & Just (1989) provide a framework in which to analyze the effect of information on consumer choice and welfare in this context. They argue that when consumers are uncertain about product quality, the provision of information can help to better align choices with consumer preferences. 
However, consumers may not use nutritional labels because consumers still require time and effort to process the information. Reading a nutritional facts panel (NFP), for instance, necessitates that the consumer remove the product from the shelf and turn the product to read the nutritional information on the back or side. In addition, consumers often have difficulty evaluating the information provided on the NFP or how to relate it to a healthy diet. Berning et al. (2008) present a simple model of demand for nutritional information. The consumer chooses to consume goods and information to maximize utility subject to budget and time constraints, which include time to acquire and to process nutritional information. Consumers who have strong preferences for nutritional content will acquire more nutritional information. Alternatively, other consumers may derive more utility from appearance or taste. Following Becker & Murphy (1993), Berning et al. show that nutritional information may act as a complement to the consumption of products with unknown nutritional quality, similar to the way advertisements complement advertised goods. From a policy perspective, the rise in the U.S. obesity rate coupled with the asymmetry of information has resulted in changes in the regulatory environment. The U.S. Food and Drug Administration (FDA) is currently considering a change to the format and content of nutritional labels, originally implemented in 1994 to promote increased label use. Consumers’ general understanding of the link between food consumption and health, and widespread interest in the provision of nutritional information on food labels, is documented in the existing literature (e.g., Williams 2005, Grunert & Wills 2007). Yet only approximately half of consumers claim to use NFPs when making food purchasing decisions (Blitstein & Evans 2006). Moreover, self-reported consumer use of nutritional labels has declined from 1995 to 2006, with the largest decline for younger age groups (20–29 years) and less educated consumers (Todd & Variyam 2008). This decline supports research findings that consumers prefer short front label claims over the NFP’s lengthy back label explanations (e.g., Levy & Fein 1998, Wansink et al. 2004, Williams 2005, Grunert & Wills 2007). Furthermore, regulatory rules and enforcement policies may have induced firms to move away from reinforcing nutritional claims through advertising (e.g., Ippolito & Pappalardo 2002). Finally, critical media coverage of regulatory challenges (e.g., Nestle 2000) may have contributed to decreased labeling usage over time. Excellent review papers on this topic preceded and inspired this present review (e.g., Baltas 2001, Williams 2005, Drichoutis et al. 2006). In particular, Drichoutis et al. (2006) reviews the nutritional labeling literature and addresses specific issues regarding the determinants of label use, the debate on mandatory labeling, label formats preferred by consumers, and the effect of nutritional label use on purchase and dietary behavior. The current review article updates and complements these earlier reviews by focusing on recent work and highlighting major contributions in applied analyses on how consumers value, utilize, and respond to nutritional labels. 
We first cover the health and nutritional aspects of consumer food choices found in the literature to frame the discussion on nutritional labels in the context of the recent debate on obesity prevention policies. Second, we discuss the different empirical approaches that are utilized to investigate consumers’ response to and valuation of nutritional labels, classifying existing work into three categories according to the empirical strategy and data sources. First, we present findings based on consumer surveys and stated consumer responses to labels. The second set of articles reviewed utilizes nonexperimental data and focuses on estimating consumer valuation of labels on the basis of revealed preferences. Here, the empirical strategy is structural, using hedonic methods, structural demand analyses, or discrete choice models and allowing for estimation of consumers’ willingness to pay (WTP) for nutritional information. The last set of empirical contributions discussed is based on experimental data, differentiating market-level and natural experiments from laboratory evidence. These studies employ mainly reduced-form approaches. Finally, we conclude with a discussion of avenues for future research. CONSUMER FOOD DEMAND, NUTRITIONAL LABELS, AND OBESITY PREVENTION The U.S. Department of Health and Human Services declared the reduction of obesity rates to less than 15% to be one of the national health objectives for 2010, yet in 2009 no state met these targets, with only two states reporting obesity rates less than 20% (CDC 2010). Researchers have studied and identified many contributing factors, such as the decreasing relative price of calorie-dense food (Chou et al. 2004) and marketing practices that took advantage of behavioral reactions to food (Smith 2004). Other researchers argue that an increased prevalence of fast food (Cutler et al. 2003) and increased portion sizes in restaurants and at home (Wansink & van Ittersum 2007) may be the driving factors of increased food consumption. In addition, food psychologists have focused on changes in the eating environment, pointing to distractions such as television, books, conversation with others, or preoccupation with work as leading to increased food intake (Wansink 2004). Although each of these factors potentially contributes to the obesity epidemic, they do not necessarily mean that consumers wi", "title": "" }, { "docid": "bb5977b6bb06aa7bdcb7f0a74adf3271", "text": "Aspect-level sentiment classification aims to identify the sentiment expressed towards some aspects given context sentences. In this paper, we introduce an attention-over-attention (AOA) neural network for aspect level sentiment classification. Our approach models aspects and sentences in a joint way and explicitly captures the interaction between aspects and context sentences. With the AOA module, our model jointly learns the representations for aspects and sentences, and automatically focuses on the important parts in sentences. Our experiments on laptop and restaurant datasets demonstrate our approach outperforms previous LSTM-based architectures.", "title": "" }, { "docid": "f107ba1eef32a7d1c7b4c6f56470f05e", "text": "Modern biomedical research aims at drawing biological conclusions from large, highly complex biological datasets. 
It has become common practice to make extensive use of high-throughput technologies that produce big amounts of heterogeneous data. In addition to the ever-improving accuracy, methods are getting faster and cheaper, resulting in a steadily increasing need for scalable data management and easily accessible means of analysis. We present qPortal, a platform providing users with an intuitive way to manage and analyze quantitative biological data. The backend leverages a variety of concepts and technologies, such as relational databases, data stores, data models and means of data transfer, as well as front-end solutions to give users access to data management and easy-to-use analysis options. Users are empowered to conduct their experiments from the experimental design to the visualization of their results through the platform. Here, we illustrate the feature-rich portal by simulating a biomedical study based on publically available data. We demonstrate the software's strength in supporting the entire project life cycle. The software supports the project design and registration, empowers users to do all-digital project management and finally provides means to perform analysis. We compare our approach to Galaxy, one of the most widely used scientific workflow and analysis platforms in computational biology. Application of both systems to a small case study shows the differences between a data-driven approach (qPortal) and a workflow-driven approach (Galaxy). qPortal, a one-stop-shop solution for biomedical projects offers up-to-date analysis pipelines, quality control workflows, and visualization tools. Through intensive user interactions, appropriate data models have been developed. These models build the foundation of our biological data management system and provide possibilities to annotate data, query metadata for statistics and future re-analysis on high-performance computing systems via coupling of workflow management systems. Integration of project and data management as well as workflow resources in one place present clear advantages over existing solutions.", "title": "" }, { "docid": "4b4ff17023cf54fe552697ef83c83926", "text": "Artificial intelligence has been an active branch of research for computer scientists and psychologists for 50 years. The concept of mimicking human intelligence in a computer fuels the public imagination and has led to countless academic papers, news articles and fictional works. However, public expectations remain largely unfulfilled, owing to the incredible complexity of everyday human behavior. A wide range of tools and techniques have emerged from the field of artificial intelligence, many of which are reviewed here. They include rules, frames, model-based reasoning, case-based reasoning, Bayesian updating, fuzzy logic, multiagent systems, swarm intelligence, genetic algorithms, neural networks, and hybrids such as blackboard systems. These are all ingenious, practical, and useful in various contexts. Some approaches are pre-specified and structured, while others specify only low-level behavior, leaving the intelligence to emerge through complex interactions. Some approaches are based on the use of knowledge expressed in words and symbols, whereas others use only mathematical and numerical constructions. It is proposed that there exists a spectrum of intelligent behaviors from low-level reactive systems through to high-level systems that encapsulate specialist expertise. 
Separate branches of research have made strides at both ends of the spectrum, but difficulties remain in devising a system that spans the full spectrum of intelligent behavior, including the difficult areas in the middle that include common sense and perception. Artificial intelligence is increasingly appearing in situated systems that interact with their physical environment. As these systems become more compact they are likely to become embedded into everyday equipment. As the 50th anniversary approaches of the Dartmouth conference where the term ‘artificial intelligence’ was first published, it is concluded that the field is in good shape and has delivered some great results. Yet human thought processes are incredibly complex, and mimicking them convincingly remains an elusive challenge.", "title": "" }, { "docid": "bf9d96d5bf19d51a89c64f83291c4d55", "text": "Retouching can significantly elevate the visual appeal of photos, but many casual photographers lack the expertise to do this well. To address this problem, previous works have proposed automatic retouching systems based on supervised learning from paired training images acquired before and after manual editing. As it is difficult for users to acquire paired images that reflect their retouching preferences, we present in this article a deep learning approach that is instead trained on unpaired data, namely, a set of photographs that exhibits a retouching style the user likes, which is much easier to collect. Our system is formulated using deep convolutional neural networks that learn to apply different retouching operations on an input image. Network training with respect to various types of edits is enabled by modeling these retouching operations in a unified manner as resolution-independent differentiable filters. To apply the filters in a proper sequence and with suitable parameters, we employ a deep reinforcement learning approach that learns to make decisions on what action to take next, given the current state of the image. In contrast to many deep learning systems, ours provides users with an understandable solution in the form of conventional retouching edits rather than just a “black-box” result. Through quantitative comparisons and user studies, we show that this technique generates retouching results consistent with the provided photo set.", "title": "" } ]
scidocsrr
f74d810878292ab625f67c5e2ffaeabc
Eye Gaze Correction with a Single Webcam Based on Eye-Replacement
[ { "docid": "b29947243b1ad21b0529a6dd8ef3c529", "text": "We define a multiresolution spline technique for combining two or more images into a larger image mosaic. In this procedure, the images to be splined are first decomposed into a set of band-pass filtered component images. Next, the component images in each spatial frequency hand are assembled into a corresponding bandpass mosaic. In this step, component images are joined using a weighted average within a transition zone which is proportional in size to the wave lengths represented in the band. Finally, these band-pass mosaic images are summed to obtain the desired image mosaic. In this way, the spline is matched to the scale of features within the images themselves. When coarse features occur near borders, these are blended gradually over a relatively large distance without blurring or otherwise degrading finer image details in the neighborhood of th e border.", "title": "" } ]
[ { "docid": "757441e95be19ca4569c519fb35adfb7", "text": "Autonomous driving in public roads requires precise localization within the range of few centimeters. Even the best current precise localization system based on the Global Navigation Satellite System (GNSS) can not always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finder and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDARs) are very expensive sensors and stereo vision requires powerful dedicated hardware to process the cameras information. In this context, this article presents a low-cost architecture of sensors and data fusion algorithm capable of autonomous driving in narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings in the vehicle's backwards. This information is used to localize the vehicle in a map, that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system on a real autonomous driving situation.", "title": "" }, { "docid": "ca072e97f8a5486347040aeaa7909d60", "text": "Camera-based stereo-vision provides cost-efficient vision capabilities for robotic systems. The objective of this paper is to examine the performance of stereo-vision as means to enable a robotic inspection cell for haptic quality testing with the ability to detect relevant information related to the inspection task. This information comprises the location and 3D representation of a complex object under inspection as well as the location and type of quality features which are subject to the inspection task. Among the challenges is the low-distinctiveness of features in neighboring area, inconsistent lighting, similar colors as well as low intra-class variances impeding the retrieval of quality characteristics. The paper presents the general outline of the vision chain as well as performance analysis of various algorithms for relevant steps in the machine vision chain thus indicating the capabilities and drawbacks of a camera-based stereo-vision for flexible use in complex machine vision tasks.", "title": "" }, { "docid": "dfbe5a92d45d4081910b868d78a904d0", "text": "Actuation is essential for artificial machines to interact with their surrounding environment and to accomplish the functions for which they are designed. Over the past few decades, there has been considerable progress in developing new actuation technologies. However, controlled motion still represents a considerable bottleneck for many applications and hampers the development of advanced robots, especially at small length scales. Nature has solved this problem using molecular motors that, through living cells, are assembled into multiscale ensembles with integrated control systems. These systems can scale force production from piconewtons up to kilonewtons. By leveraging the performance of living cells and tissues and directly interfacing them with artificial components, it should be possible to exploit the intricacy and metabolic efficiency of biological actuation within artificial machines. 
We provide a survey of important advances in this biohybrid actuation paradigm.", "title": "" }, { "docid": "a999bf3da879dde7fc2acb8794861daf", "text": "Most OECD Member countries have sought to renew their systems and structures of public management in the last 10-15 years. Some started earlier than others and the emphasis will vary among Member countries according to their historic traditions and institutions. There is no single best model of public management, but what stands out most clearly is the extent to which countries have pursued and are pursuing broadly common approaches to public management reform. This is most probably because countries have been responding to essentially similar pressures to reform.", "title": "" }, { "docid": "fc70a1820f838664b8b51b5adbb6b0db", "text": "This paper presents a method for identifying an opinion with its holder and topic, given a sentence from online news media texts. We introduce an approach of exploiting the semantic structure of a sentence, anchored to an opinion bearing verb or adjective. This method uses semantic role labeling as an intermediate step to label an opinion holder and topic using data from FrameNet. We decompose our task into three phases: identifying an opinion-bearing word, labeling semantic roles related to the word in the sentence, and then finding the holder and the topic of the opinion word among the labeled semantic roles. For a broader coverage, we also employ a clustering technique to predict the most probable frame for a word which is not defined in FrameNet. Our experimental results show that our system performs significantly better than the baseline.", "title": "" }, { "docid": "03ddd583496d561d6e5389b97db61916", "text": "A spatial outlier is a spatially referenced object whose non-spatial attribute values are significantly different from the values of its neighborhood. Identification of spatial outliers can lead to the discovery of unexpected, interesting, and useful spatial patterns for further analysis. One drawback of existing methods is that normal objects tend to be falsely detected as spatial outliers when their neighborhood contains true spatial outliers. In this paper, we propose a suite of spatial outlier detection algorithms to overcome this disadvantage. We formulate the spatial outlier detection problem in a general way and design algorithms which can accurately detect spatial outliers. In addition, using a real-world census data set, we demonstrate that our approaches can not only avoid detecting false spatial outliers but also find true spatial outliers ignored by existing methods.", "title": "" }, { "docid": "89013222fccc85c1321020153b8a416b", "text": "The objective of this paper is to summarize the work that has been developed by the authors for the last several years, in order to demonstrate that the Theory of Characteristic Modes can be used to perform a systematic design of different types of antennas. Characteristic modes are real current modes that can be computed numerically for conducting bodies of arbitrary shape. Since characteristic modes form a set of orthogonal functions, they can be used to expand the total current on the surface of the body. However, this paper shows that what makes characteristic modes really attractive for antenna design is the physical insight they bring into the radiating phenomena taking place in the antenna. 
The resonance frequency of modes, as well as their radiating behavior, can be determined from the information provided by the eigenvalues associated with the characteristic modes. Moreover, by studying the current distribution of modes, an optimum feeding arrangement can be found in order to obtain the desired radiating behavior.", "title": "" }, { "docid": "b59d49106614382cf97f276529d1ddd1", "text": "The POWER8 processor is the latest RISC (Reduced Instruction Set Computer) microprocessor from IBM. It is fabricated using the company’s 22-nm Silicon on Insulator (SOI) technology with 15 layers of metal, and it has been designed to significantly improve both single-thread performance and single-core throughput over its predecessor, the POWER7 processor. The rate of increase in processor frequency enabled by new silicon technology advancements has decreased dramatically in recent generations, as compared to the historic trend. This has caused many processor designs in the industry to show very little improvement in either single-thread or single-core performance, and, instead, larger numbers of cores are primarily pursued in each generation. Going against this industry trend, the POWER8 processor relies on a much improved core and nest microarchitecture to achieve approximately one-and-a-half times the single-thread performance and twice the single-core throughput of the POWER7 processor in several commercial applications. Combined with a 50% increase in the number of cores (from 8 in the POWER7 processor to 12 in the POWER8 processor), the result is a processor that leads the industry in performance for enterprise workloads. This paper describes the core microarchitecture innovations made in the POWER8 processor that resulted in these significant performance benefits.", "title": "" }, { "docid": "ae6d36ccbf79ae6f62af3a62ef3e3bb2", "text": "This paper presents a new neural network system called the Evolving Tree. This network resembles the Self-Organizing map, but deviates from it in several aspects, which are desirable in many analysis tasks. First of all the Evolving Tree grows automatically, so the user does not have to decide the network’s size before training. Secondly the network has a hierarchical structure, which makes network training and use computationally very efficient. Test results with both synthetic and actual data show that the Evolving Tree works quite well.", "title": "" }, { "docid": "fd543534d6a9cf10abb2f073cec41fdb", "text": "We present an O(√n log n)-approximation algorithm for the problem of finding the sparsest spanner of a given directed graph G on n vertices. A spanner of a graph is a sparse subgraph that approximately preserves distances in the original graph. More precisely, given a graph G = (V, E) with nonnegative edge lengths d : E → R_{≥0} and a stretch k ≥ 1, a subgraph H = (V, E_H) is a k-spanner of G if for every edge (s, t) ∈ E, the graph H contains a path from s to t of length at most k · d(s, t). The previous best approximation ratio was Õ(n^{2/3}), due to Dinitz and Krauthgamer (STOC ’11). 
We also improve the approximation ratio for the important special case of directed 3-spanners with unit edge lengths from Õ(√n) to O(n^{1/3} log n). The best previously known algorithms for this problem are due to Berman, Raskhodnikova and Ruan (FSTTCS ’10) and Dinitz and Krauthgamer. The approximation ratio of our algorithm almost matches Dinitz and Krauthgamer’s lower bound for the integrality gap of a natural linear programming relaxation. Our algorithm directly implies an O(n^{1/3} log n)-approximation for the 3-spanner problem on undirected graphs with unit lengths. An easy O(√n)-approximation algorithm for this problem has been the best known for decades. Finally, we consider the Directed Steiner Forest problem: given a directed graph with edge costs and a collection of ordered vertex pairs, find a minimum-cost subgraph that contains a path between every prescribed pair. We obtain an approximation ratio of O(n^{2/3+ε}) for any constant ε > 0, which improves the O(n^{ε} · min(n^{4/5}, m^{2/3})) ratio due to Feldman, Kortsarz and Nutov (JCSS’12).", "title": "" }, { "docid": "337c0573c1d60c141e925e182fdd1cb8", "text": "Autonomous systems, often realized as multi-agent systems, are envisioned to deal with uncertain and dynamic environments. They are applied in dangerous situations, e.g. as rescue robots or to relieve humans from complex and tedious tasks like driving a car or infrastructure maintenance. But in order to further improve the technology a generic measurement and benchmarking of autonomy is required. Within this paper we present an improved understanding of autonomous systems. Based on this foundation we introduce our concept of a multi-dimensional autonomy metric framework that especially takes into account multisystem environments. Finally, our approach is illustrated by means of an example.", "title": "" }, { "docid": "587f1510411636090bc192b1b9219b58", "text": "Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition-the ability to come up with creative ideas, problem solutions and products-is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying on valence and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to 'happy music' (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task, than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. 
Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings when creative thinking is needed.", "title": "" }, { "docid": "b4a8541c2870ea3d91819c0c0de68ad3", "text": "The paper will describe various types of security issues, which include confidentiality, integrity and availability of data. There exist various threats to security, such as traffic analysis, snooping, spoofing, denial of service attacks, etc. Asymmetric key encryption techniques may provide a higher level of security compared to symmetric key encryption. Although symmetric and asymmetric key cryptography techniques already exist, security concerns remain. A brief description of the proposed framework is given, which uses a random combination of public and private keys. The mechanisms include integrity, availability, authentication, non-repudiation, confidentiality and access control, which are achieved by a private-private key model in which the user is restricted at both the sender and receiver ends, unlike in other models. A review of all these systems is described in this paper.", "title": "" }, { "docid": "e74d1eb4f1d5c45989aff2cb0e79a83e", "text": "Environmental audio tagging is a newly proposed task to predict the presence or absence of a specific audio event in a chunk. Deep neural network (DNN) based methods have been successfully adopted for predicting the audio tags in the domestic audio scene. In this paper, we propose to use a convolutional neural network (CNN) to extract robust features from mel-filter banks (MFBs), spectrograms or even raw waveforms for audio tagging. Gated recurrent unit (GRU) based recurrent neural networks (RNNs) are then cascaded to model the long-term temporal structure of the audio signal. To complement the input information, an auxiliary CNN is designed to learn on the spatial features of stereo recordings. We evaluate our proposed methods on Task 4 (audio tagging) of the Detection and Classification of Acoustic Scenes and Events 2016 (DCASE 2016) challenge. Compared with our recent DNN-based method, the proposed structure can reduce the equal error rate (EER) from 0.13 to 0.11 on the development set. The spatial features can further reduce the EER to 0.10. The performance of the end-to-end learning on raw waveforms is also comparable. Finally, on the evaluation set, we get the state-of-the-art performance with 0.12 EER while the performance of the best existing system is 0.15 EER.", "title": "" }, { "docid": "046c9eaa6fc9a6516982477e1a02f6d0", "text": "Imperfections in healthcare revenue cycle management systems cause discrepancies between submitted claims and received payments. This paper presents a method for deriving attributional rules that can be used to support the preparation and screening of claims prior to their submission to payers. The method starts with unsupervised analysis of past payments to determine normal levels of payments for services. Then, supervised machine learning is used to derive sets of attributional rules for predicting potential discrepancies in claims. New claims can then be classified using the created models. The method was tested on a subset of Obstetrics claims for payment submitted by one hospital to Medicaid. One year of data was used to create models, which were tested using the following year's data. 
Results indicate that rule-based models are able to detect abnormal claims prior to their submission.", "title": "" }, { "docid": "03c03dcdc15028417e699649291a2317", "text": "The unique characteristics of origami to realize 3-D shape from 2-D patterns have been fascinating many researchers and engineers. This paper presents a fabrication of origami patterned fabric wheels that can deform and change the radius of the wheels. PVC segments are enclosed in the fabrics to build a tough and foldable structure. A special cable driven mechanism was designed to allow the wheels to deform while rotating. A mobile robot with two origami wheels has been built and tested to show that it can deform its wheels to overcome various obstacles.", "title": "" }, { "docid": "c767a9b6808b4556c6f55dd406f8eb0d", "text": "BACKGROUND\nInterest in mindfulness has increased exponentially, particularly in the fields of psychology and medicine. The trait or state of mindfulness is significantly related to several indicators of psychological health, and mindfulness-based therapies are effective at preventing and treating many chronic diseases. Interest in mobile applications for health promotion and disease self-management is also growing. Despite the explosion of interest, research on both the design and potential uses of mindfulness-based mobile applications (MBMAs) is scarce.\n\n\nOBJECTIVE\nOur main objective was to study the features and functionalities of current MBMAs and compare them to current evidence-based literature in the health and clinical setting.\n\n\nMETHODS\nWe searched online vendor markets, scientific journal databases, and grey literature related to MBMAs. We included mobile applications that featured a mindfulness-based component related to training or daily practice of mindfulness techniques. We excluded opinion-based articles from the literature.\n\n\nRESULTS\nThe literature search resulted in 11 eligible matches, two of which completely met our selection criteria-a pilot study designed to evaluate the feasibility of a MBMA to train the practice of \"walking meditation,\" and an exploratory study of an application consisting of mood reporting scales and mindfulness-based mobile therapies. The online market search eventually analyzed 50 available MBMAs. Of these, 8% (4/50) did not work, thus we only gathered information about language, downloads, or prices. The most common operating system was Android. Of the analyzed apps, 30% (15/50) have both a free and paid version. MBMAs were devoted to daily meditation practice (27/46, 59%), mindfulness training (6/46, 13%), assessments or tests (5/46, 11%), attention focus (4/46, 9%), and mixed objectives (4/46, 9%). We found 108 different resources, of which the most used were reminders, alarms, or bells (21/108, 19.4%), statistics tools (17/108, 15.7%), audio tracks (15/108, 13.9%), and educational texts (11/108, 10.2%). Daily, weekly, monthly statistics, or reports were provided by 37% (17/46) of the apps. 28% (13/46) of them permitted access to a social network. No information about sensors was available. The analyzed applications seemed not to use any external sensor. English was the only language of 78% (39/50) of the apps, and only 8% (4/50) provided information in Spanish. 20% (9/46) of the apps have interfaces that are difficult to use. No specific apps exist for professionals or, at least, for both profiles (users and professionals). 
We did not find any evaluations of health outcomes resulting from the use of MBMAs.\n\n\nCONCLUSIONS\nWhile a wide selection of MBMAs seem to be available to interested people, this study still shows an almost complete lack of evidence supporting the usefulness of those applications. We found no randomized clinical trials evaluating the impact of these applications on mindfulness training or health indicators, and the potential for mobile mindfulness applications remains largely unexplored.", "title": "" }, { "docid": "af82ea560b98535f3726be82a2d23536", "text": "Influence Maximization is an extensively-studied problem that targets at selecting a set of initial seed nodes in the Online Social Networks (OSNs) to spread the influence as widely as possible. However, it remains an open challenge to design fast and accurate algorithms to find solutions in large-scale OSNs. Prior Monte-Carlo-simulation-based methods are slow and not scalable, while other heuristic algorithms do not have any theoretical guarantee and they have been shown to produce poor solutions for quite some cases. In this paper, we propose hop-based algorithms that can easily scale to millions of nodes and billions of edges. Unlike previous heuristics, our proposed hop-based approaches can provide certain theoretical guarantees. Experimental evaluations with real OSN datasets demonstrate the efficiency and effectiveness of our algorithms.", "title": "" }, { "docid": "6259b792713367345374d437f37abdb0", "text": "SWOT analysis (Strength, Weakness, Opportunity, and Threat) has been in use since the 1960s as a tool to assist strategic planning in various types of enterprises including those in the construction industry. Whilst still widely used, the approach has called for improvements to make it more helpful in strategic management. The project described in this paper aimed to study whether the process to convert a SWOT analysis into a strategic plan could be assisted with some simple rationally quantitative model, as an augmented SWOT analysis. By utilizing the mathematical approaches including the quantifying techniques, the “Maximum Subarray” method, and fuzzy mathematics, one or more Heuristic Rules are derived from a SWOT analysis. These Heuristic Rules bring into focus the most influential factors concerning a strategic planning situation, and thus inform strategic analysts where particular consideration should be given. A case study conducted in collaboration with a Chinese international construction company showed that the new SWOT approach is more helpful to strategic planners. The paper provides an augmented SWOT analysis approach for strategists to conduct strategic planning in the construction industry. It also contributes fresh insights into strategic planning by introducing rationally analytic processes to improve the SWOT analysis.", "title": "" }, { "docid": "415f6ca35f6ea8a9f2db938c61cf74f6", "text": "Camptothecin (CPT) belongs to a group of monoterpenoidindole alkaloids (TIAs) and its derivatives such as irinothecan and topothecan have been widely used worldwide for the treatment of cancer, giving rise to rapidly increasing market demands. Genes from Catharanthus roseus encoding strictosidine synthase (STR) and geraniol 10-hydroxylase (G10H), were separately and simultaneously introduced into Ophiorrhiza pumila hairy roots. 
Overexpression of individual G10H (G lines) significantly improved CPT production with respect to non-transgenic hairy root cultures (NC line) and single STR overexpressing lines (S lines), indicating that G10H plays a more important role in stimulating CPT accumulation than STR in O. pumila. Furthermore, co-overexpression of G10H and STR genes (SG lines) caused a 56% increase in the yields of CPT compared to the NC line and single-gene transgenic lines, showing that simultaneous introduction of G10H and STR can produce a synergistic effect on CPT biosynthesis in O. pumila. The MTT assay results indicated that CPT extracted from different lines showed similar anti-tumor activity, suggesting that transgenic O. pumila hairy root lines could be an alternative approach to obtain CPT. To our knowledge, this is the first report on the enhancement of CPT production in O. pumila employing a metabolic engineering strategy.", "title": "" } ]
scidocsrr
4d28282061d4728376604e8cf11fc0d4
A 7-Layered E-Government Framework Consolidating Technical, Social and Managerial Aspects
[ { "docid": "6bc712821f8dcf74d2662abb2452cf64", "text": "We find that the presence of village Internet facilities, offering government to citizen services, is positively associated with the rate at which the villagers obtain some of these services. In a study of a rural Internet project in India, we identify a positive correlation for two such Internet services: obtaining birth certificates for children and applications for old age pensions. Both these government services are of considerable social and economic value to the citizens. Villagers report that the Internet based services saved them time, money, and effort compared with obtaining the services directly from the government office. We also find that these services can reduce corruption in the delivery of these services. After over one year of successful operation, however, the e-government program was not able to maintain the necessary level of local political and administrative support to remain institutionally viable. As government officers shifted from the region, or grew to find the program a threat, the e-government services faltered. We argue that this failure was due to a variety of Critical Failure Factors. We end with a simple sustainability failure model. In summary, we propose that the e-government program failed to be politically and institutionally sustainable due to people, management, cultural, and structural factors.", "title": "" }, { "docid": "44e4797655292e97651924115fd8d711", "text": "Information and communication technology has the capability to improve the process by which governments involve citizens in formulating public policy and public projects. Even though much of government regulations may now be in digital form (and often available online), due to their complexity and diversity, identifying the ones relevant to a particular context is a non-trivial task. Similarly, with the advent of a number of electronic online forums, social networking sites and blogs, the opportunity of gathering citizens’ petitions and stakeholders’ views on government policy and proposals has increased greatly, but the volume and the complexity of analyzing unstructured data makes this difficult. On the other hand, text mining has come a long way from simple keyword search, and matured into a discipline capable of dealing with much more complex tasks. In this paper we discuss how text-mining techniques can help in retrieval of information and relationships from textual data sources, thereby assisting policy makers in discovering associations between policies and citizens’ opinions expressed in electronic public forums and blogs etc. We also present here, an integrated text mining based architecture for e-governance decision support along with a discussion on the Indian scenario.", "title": "" } ]
[ { "docid": "d310779b1006f90719a0ece3cf2583b2", "text": "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model’s decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models.", "title": "" }, { "docid": "fc78dbac8fc4d27072521513e4387641", "text": "The flight of insects has fascinated physicists and biologists for more than a century. Yet, until recently, researchers were unable to rigorously quantify the complex wing motions of flapping insects or measure the forces and flows around their wings. However, recent developments in high-speed videography and tools for computational and mechanical modeling have allowed researchers to make rapid progress in advancing our understanding of insect flight. These mechanical and computational fluid dynamic models, combined with modern flow visualization techniques, have revealed that the fluid dynamic phenomena underlying flapping flight are different from those of non-flapping, 2-D wings on which most previous models were based. In particular, even at high angles of attack, a prominent leading edge vortex remains stably attached on the insect wing and does not shed into an unsteady wake, as would be expected from non-flapping 2-D wings. Its presence greatly enhances the forces generated by the wing, thus enabling insects to hover or maneuver. In addition, flight forces are further enhanced by other mechanisms acting during changes in angle of attack, especially at stroke reversal, the mutual interaction of the two wings at dorsal stroke reversal or wing-wake interactions following stroke reversal. This progress has enabled the development of simple analytical and empirical models that allow us to calculate the instantaneous forces on flapping insect wings more accurately than was previously possible. It also promises to foster new and exciting multi-disciplinary collaborations between physicists who seek to explain the phenomenology, biologists who seek to understand its relevance to insect physiology and evolution, and engineers who are inspired to build micro-robotic insects using these principles. This review covers the basic physical principles underlying flapping flight in insects, results of recent experiments concerning the aerodynamics of insect flight, as well as the different approaches used to model these phenomena.", "title": "" }, { "docid": "174bce522f96f0206fb3aae6613cf821", "text": "Fake news and alternative facts have dominated the news cycle of late. In this paper, we present a prototype system that uses social argumentation to verify the validity of proposed alternative facts and help in the detection of fake news. 
We utilize fundamental argumentation ideas in a graph-theoretic framework that also incorporates semantic web and linked data principles. The argumentation structure is crowdsourced and mediated by expert moderators in a virtual community.", "title": "" }, { "docid": "7b6d2d261675aa83f53c4e3c5523a81b", "text": "(IV) Intravenous therapy is one of the most commonly performed procedures in hospitalized patients yet phlebitis affects 27% to 70% of all patients receiving IV therapy. The incidence of phlebitis has proved to be a menace in effective care of surgical patients, delaying their recovery and increasing duration of hospital stay and cost. The recommendations for reducing its incidence and severity have been varied and of questionable efficacy. The current study was undertaken to evaluate whether elective change of IV cannula at fixed intervals can have any impact on incidence or severity of phlebitis in surgical patients. All patients admitted to the Department of Surgery, SMIMS undergoing IV cannula insertion, fulfilling the selection criteria and willing to participate in the study, were segregated into two random groups prospectively: Group A wherein cannula was changed electively after 24 hours into a fresh vein preferably on the other upper limb and Group B wherein IV cannula was changed only on development of phlebitis or leak i.e. need-based change. The material/brand and protocol for insertion of IV cannula were standardised for all patients, including skin preparation, insertion, fixation and removal. After cannulation, assessment was made after 6 hours, 12 hours and every 24 hours thereafter at all venepuncture sites. VIP and VAS scales were used to record phlebitis and pain respectively. Upon analysis, though there was a lower VIP score in group A compared to group B (0.89 vs. 1.32), this difference was not statistically significant (p-value = 0.277). Furthermore, the differences in pain, as assessed by VAS, at the site of puncture and along the vein were statistically insignificant (p-value > 0.05). Our results are in contradiction to few other studies which recommend a policy of routine change of cannula. Further we advocate a close and thorough monitoring of the venepuncture site and the length of vein immediately distal to the puncture site, as well as a meticulous standardized protocol for IV access.", "title": "" }, { "docid": "4417f505ed279689afa0bde104b3d472", "text": "A single-cavity dual-mode substrate integrated waveguide (SIW) bandpass filter (BPF) for X-band application is presented in this paper. Coplanar waveguide (CPW) is used as SIW-microstrip transition in this design. Two slots of the CPW with unequal lengths are used to excite two degenerate modes, i.e. TE102 and TE201. A slot line is etched on the ground plane of the SIW cavity for perturbation. Its size and position are related to the effect of mode-split, namely the coupling between the two degenerate modes. Due to the cancellation of the two modes, a transmission zero in the lower stopband of the BPF is achieved, which improves the selectivity of the proposed BPF. And the location of the transmission zero can be controlled by adjusting the position and the size of the slot line perturbation properly. By introducing source-load coupling, an additional transmission zero is produced in the upper stopband of the BPF, it enhances the stopband performance of the BPF. Influences of the slot line perturbation on the BPF have been studied. 
A dual-mode BPF for X-band application has been designed, fabricated and measured. A good agreement between simulation and measurement verifies the validity of this design methodology.", "title": "" }, { "docid": "58451466f0867d20366c7c51be7446e0", "text": "Hybrid intelligence systems combine machine and human intelligence to overcome the shortcomings of existing AI systems. This paper reviews recent research efforts towards developing hybrid systems focusing on reasoning methods for optimizing access to human intelligence and on gaining comprehensive understanding of humans as helpers of AI systems. It concludes by discussing short and long term research directions.", "title": "" }, { "docid": "3e727d70f141f52fb9c432afa3747ceb", "text": "In this paper, we propose an improvement of Adversarial Transformation Networks(ATN) [1]to generate adversarial examples, which can fool white-box models and blackbox models with a state of the art performance and won the SECOND place in the non-target task in CAAD 2018. In this section, we first introduce the whole architecture about our method, then we present our improvement on loss functions to generate adversarial examples satisfying the L∞ norm restriction in the non-targeted attack problem. Then we illustrate how to use a robust-enhance module to make our adversarial examples more robust and have better transfer-ability. At last we will show our method on how to attack an ensemble of models.", "title": "" }, { "docid": "87fefee3cb35d188ad942ee7c8fad95f", "text": "Financial frictions are a central element of most of the models that the literature on emerging markets crises has proposed for explaining the ‘Sudden Stop’ phenomenon. To date, few studies have aimed to examine the quantitative implications of these models and to integrate them with an equilibrium business cycle framework for emerging economies. This paper surveys these studies viewing them as ability-to-pay and willingness-to-pay variations of a framework that adds occasionally binding borrowing constraints to the small open economy real-business-cycle model. A common feature of the different models is that agents factor in the risk of future Sudden Stops in their optimal plans, so that equilibrium allocations and prices are distorted even when credit constraints do not bind. Sudden Stops are a property of the unique, flexible-price competitive equilibrium of these models that occurs in a particular region of the state space in which negative shocks make borrowing constraints bind. The resulting nonlinear effects imply that solving the models requires non-linear numerical methods, which are described in the survey. The results show that the models can yield relatively infrequent Sudden Stops with large current account reversals and deep recessions nested within smoother business cycles. Still, research in this area is at an early stage and this survey aims to stimulate further work. Cristina Arellano Enrique G. Mendoza Department of Economics Department of Economics Social Sciences Building University of Maryland Duke University College Park, MD 20742 Durham, NC 27708-0097 and NBER mendozae@econ.duke.edu", "title": "" }, { "docid": "b32286014bb7105e62fba85a9aab9019", "text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. 
The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.", "title": "" }, { "docid": "ec492f3ca84546c84a9ee8e1992b1baf", "text": "Sketch is an important media for human to communicate ideas, which reflects the superiority of human intelligence. Studies on sketch can be roughly summarized into recognition and generation. Existing models on image recognition failed to obtain satisfying performance on sketch classification. But for sketch generation, a recent study proposed a sequence-to-sequence variational-auto-encoder (VAE) model called sketch-rnn which was able to generate sketches based on human inputs. The model achieved amazing results when asked to learn one category of object, such as an animal or a vehicle. However, the performance dropped when multiple categories were fed into the model. Here, we proposed a model called sketch-pix2seq which could learn and draw multiple categories of sketches. Two modifications were made to improve the sketch-rnn model: one is to replace the bidirectional recurrent neural network (BRNN) encoder with a convolutional neural network(CNN); the other is to remove the Kullback-Leibler divergence from the objective function of VAE. Experimental results showed that models with CNN encoders outperformed those with RNN encoders in generating human-style sketches. Visualization of the latent space illustrated that the removal of KL-divergence made the encoder learn a posterior of latent space that reflected the features of different categories. 
Moreover, the combination of CNN encoder and removal of KL-divergence, i.e., the sketch-pix2seq model, had better performance in learning and generating sketches of multiple categories and showed promising results in creativity tasks.", "title": "" }, { "docid": "fb7b31b83a0d79a054bab155dfaae79e", "text": "An otherwise-healthy 13-year-old girl with previously normal nails developed longitudinal pigmented bands on multiple fingernails. Physical examination revealed faintly pigmented bands on multiple fingernails and on the left fifth toenail. We believed that the cause of the pigmented bands was onychophagia-induced longitudinal melanonychia, a rare phenomenon, which emphasizes the need for dermatologists to question patients with melanonychia about their nail biting habits because they may not be forthcoming with this information.", "title": "" }, { "docid": "9cae19b4d3b4a8258b1013a9895a6c91", "text": "Research has mainly neglected to examine if the possible antagonism of play/games and seriousness affects the educational potential of serious gaming. This article follows a microsociological approach and treats play and seriousness as different social frames, with each being indicated by significant symbols and containing unique social rules, adequate behavior and typical consequences of action. It is assumed that due to the specific qualities of these frames, serious frames are perceived as more credible but less entertaining than playful frames – regardless of subject matter. Two empirical studies were conducted to test these hypotheses. Results partially confirm expectations, but effects are not as strong as assumed and sometimes seem to be moderated by further variables, such as gender and attitudes. Overall, this article demonstrates that the educational potential of serious gaming depends not only on media design, but also on social context and personal variables.", "title": "" }, { "docid": "89a04e656c8e42a78363a5087771b58d", "text": "Analyzing the security of Wearable Internet-of-Things (WIoT) devices is considered a complex task due to their heterogeneous nature. In addition, there is currently no mechanism that performs security testing for WIoT devices in different contexts. In this article, we propose an innovative security testbed framework targeted at wearable devices, where a set of security tests are conducted, and a dynamic analysis is performed by realistically simulating environmental conditions in which WIoT devices operate. The architectural design of the proposed testbed and a proof-of-concept, demonstrating a preliminary analysis and the detection of context-based attacks executed by smartwatch devices, are presented.", "title": "" }, { "docid": "307c8b04c447757f1bbcc5bf9976f423", "text": "BACKGROUND\nChemical and biomedical Named Entity Recognition (NER) is an essential prerequisite task before effective text mining can begin for biochemical-text data. Exploiting unlabeled text data to leverage system performance has been an active and challenging research topic in text mining due to the recent growth in the amount of biomedical literature. We present a semi-supervised learning method that efficiently exploits unlabeled data in order to incorporate domain knowledge into a named entity recognition model and to leverage system performance. 
The proposed method includes Natural Language Processing (NLP) tasks for text preprocessing, learning word representation features from a large amount of text data for feature extraction, and conditional random fields for token classification. Other than the free text in the domain, the proposed method does not rely on any lexicon nor any dictionary in order to keep the system applicable to other NER tasks in bio-text data.\n\n\nRESULTS\nWe extended BANNER, a biomedical NER system, with the proposed method. This yields an integrated system that can be applied to chemical and drug NER or biomedical NER. We call our branch of the BANNER system BANNER-CHEMDNER, which is scalable over millions of documents, processing about 530 documents per minute, is configurable via XML, and can be plugged into other systems by using the BANNER Unstructured Information Management Architecture (UIMA) interface. BANNER-CHEMDNER achieved an 85.68% and an 86.47% F-measure on the testing sets of CHEMDNER Chemical Entity Mention (CEM) and Chemical Document Indexing (CDI) subtasks, respectively, and achieved an 87.04% F-measure on the official testing set of the BioCreative II gene mention task, showing remarkable performance in both chemical and biomedical NER. BANNER-CHEMDNER system is available at: https://bitbucket.org/tsendeemts/banner-chemdner.", "title": "" }, { "docid": "5959967e93e5cf63733f60d8db26105e", "text": "Many aspects of our physical environment are hidden. For example, it is hard to estimate how heavy an object is from visual observation alone. In this paper we examine how people actively \"experiment\" within the physical world to discover such latent properties. In the first part of the paper, we develop a novel framework for the quantitative analysis of the information produced by physical interactions. We then describe two experiments that present participants with moving objects in \"microworlds\" that operate according to continuous spatiotemporal dynamics similar to everyday physics (i.e., forces of gravity, friction, etc.). Participants were asked to interact with objects in the microworlds in order to identify their masses, or the forces of attraction/repulsion that governed their movement. Using our modeling framework, we find that learners who freely interacted with the physical system selectively produced evidence that revealed the physical property consistent with their inquiry goal. As a result, their inferences were more accurate than for passive observers and, in some contexts, for yoked participants who watched video replays of an active learner's interactions. We characterize active learners' actions into a range of micro-experiment strategies and discuss how these might be learned or generalized from past experience. The technical contribution of this work is the development of a novel analytic framework and methodology for the study of interactively learning about the physical world. Its empirical contribution is the demonstration of sophisticated goal directed human active learning in a naturalistic context.", "title": "" }, { "docid": "843e7bfe22d8b93852374dde8715ca42", "text": "In this paper, we formalize the idea behind capsule nets of using a capsule vector rather than a neuron activation to predict the label of samples. To this end, we propose to learn a group of capsule subspaces onto which an input feature vector is projected. Then the lengths of resultant capsules are used to score the probability of belonging to different classes. 
We train such a Capsule Projection Network (CapProNet) by learning an orthogonal projection matrix for each capsule subspace, and show that each capsule subspace is updated until it contains input feature vectors corresponding to the associated class. We will also show that the capsule projection can be viewed as normalizing the multiple columns of the weight matrix simultaneously to form an orthogonal basis, which makes it more effective in incorporating novel components of input features to update capsule representations. In other words, the capsule projection can be viewed as a multi-dimensional weight normalization in capsule subspaces, where the conventional weight normalization is simply a special case of the capsule projection onto 1D lines. Only a small negligible computing overhead is incurred to train the network in low-dimensional capsule subspaces or through an alternative hyper-power iteration to estimate the normalization matrix. Experiment results on image datasets show the presented model can greatly improve the performance of the state-of-the-art ResNet backbones by 10− 20% and that of the Densenet by 5− 7% respectively at the same level of computing and memory expenses. The CapProNet establishes the competitive state-of-the-art performance for the family of capsule nets by significantly reducing test errors on the benchmark datasets.", "title": "" }, { "docid": "5b57eb0b695a1c85d77db01e94904fb1", "text": "Depth map super-resolution is an emerging topic due to the increasing needs and applications using RGB-D sensors. Together with the color image, the corresponding range data provides additional information and makes visual analysis tasks more tractable. However, since the depth maps captured by such sensors are typically with limited resolution, it is preferable to enhance its resolution for improved recognition. In this paper, we present a novel joint trilateral filtering (JTF) algorithm for solving depth map super-resolution (SR) problems. Inspired by bilateral filtering, our JTF utilizes and preserves edge information from the associated high-resolution (HR) image by taking spatial and range information of local pixels. Our proposed further integrates local gradient information of the depth map when synthesizing its HR output, which alleviates textural artifacts like edge discontinuities. Quantitative and qualitative experimental results demonstrate the effectiveness and robustness of our approach over prior depth map upsampling works.", "title": "" }, { "docid": "eba80f219c9be3690f15a2c6eb6c52ce", "text": "While the incipient internet was largely text-based, the modern digital world is becoming increasingly multi-modal. Here, we examine multi-modal classification where one modality is discrete, e.g. text, and the other is continuous, e.g. visual representations transferred from a convolutional neural network. In particular, we focus on scenarios where we have to be able to classify large quantities of data quickly. We investigate various methods for performing multi-modal fusion and analyze their trade-offs in terms of classification accuracy and computational efficiency. Our findings indicate that the inclusion of continuous information improves performance over text-only on a range of multi-modal classification tasks, even with simple fusion methods. In addition, we experiment with discretizing the continuous features in order to speed up and simplify the fusion process even further. 
Our results show that fusion with discretized features outperforms text-only classification, at a fraction of the computational cost of full multimodal fusion, with the additional benefit of improved interpretability. Text classification is one of the core problems in machine learning and natural language processing (Borko and Bernick 1963; Sebastiani 2002). It plays a crucial role in important tasks ranging from document retrieval and categorization to sentiment and topic classification (Deerwester et al. 1990; Joachims 1998; Pang and Lee 2008). However, while the incipient Web was largely text-based, the recent decade has seen a surge in multi-modal content: billions of images and videos are posted and shared online every single day. That is, text is either replaced as the dominant modality, as is the case with Instagram posts or YouTube videos, or it is augmented with non-textual content, as with most of today’s web pages. This makes multi-modal classification an important problem. Here, we examine the task of multi-modal classification using neural networks. We are primarily interested in two questions: what is the best way to combine (i.e., fuse) data from different modalities, and how can we do so in the most efficient manner? We examine various efficient multi-modal fusion methods and investigate ways to speed up the fusion process. In particular, we explore discretizing the continuous features, which leads to much faster training and requires less storage, yet is still able to benefit from the inclusion of multi-modal information. If current trends continue, the Web will become increasingly multi-modal, making the question of multi-modal classification ever more pertinent. At the same time, as the Web keeps growing, we have to be able to efficiently handle ever larger quantities of data, making it important to focus on machine learning methods that can be applied to large-scale scenarios. This work aims to examine these two questions together. Our contributions are as follows. First, we compare various multi-modal fusion methods, examine their trade-offs, and show that simpler models are often desirable. Second, we experiment with discretizing continuous features in order to speed up and simplify the fusion process even further. Third, we examine learned representations for discretized features and show that they yield interpretability as a beneficial side effect. The work reported here constitutes a solid and scalable baseline for other approaches to follow; our investigation of discretized features shows how multi-modal classification does not necessarily imply a large performance penalty and is feasible in large-scale scenarios.", "title": "" }, { "docid": "879e1d7290abd5331622512ae62f1eb4", "text": "In this letter, a novel tapered slot edge with resonant cavity (TSERC) structure is adopted to improve the design of a planar printed conventional Vivaldi antenna. The proposed modified structure has the capacity to extend the low-end bandwidth limitation. 
In addition, the directivity and antenna gain of the TSERC structure Vivaldi antenna has been significantly improved when compared to a conventional Vivaldi antenna of the same size at lower frequencies. Compared to the conventional Vivaldi antenna, the TSERC structure lowers the gain at the higher frequencies. A prototype of the modified Vivaldi antenna was fabricated and tested. The measured results were found to be in good agreement with the simulated, which validates the feasibility of this novel design.", "title": "" }, { "docid": "c2a6008cf67322430d91df65520c17e4", "text": "In the future, solar energy will be a very important energy source. Several studies suppose that more than 45% of the energy in the world will be generated by photovoltaic array. Therefore it is necessary to concentrate our forces to reduce the application costs and to increment their performance. In order to reach the last aspect, it is important to note that the output characteristic of a photovoltaic array is nonlinear and changes with solar irradiation and cell’s temperature. Therefore a Maximum Power Point Tracking (MPPT) technique is needed to maximize the produced energy. This paper presents a comparative study of seven widely-adopted MPPT algorithms; their performance is evaluated using, for all the techniques, a common device with minimum hardware variations. In particular, this study compares the behaviors of each technique in presence of solar irradiation variations.", "title": "" } ]
scidocsrr
d48e0a7f0c08d5d3a306d9b744c56731
Memory scaling: A systems architecture perspective
[ { "docid": "73284fdf9bc025672d3b97ca5651084a", "text": "With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. Understanding, characterizing, and modeling the distribution of the threshold voltages across different cells in a modern multi-level cell (MLC) flash memory can enable the design of more effective and efficient error correction mechanisms to combat this degradation. We show the first published experimental measurement-based characterization of the threshold voltage distribution of flash memory. To accomplish this, we develop a testing infrastructure that uses the read retry feature present in some 2Y-nm (i.e., 20-24nm) flash chips. We devise a model of the threshold voltage distributions taking into account program/erase (P/E) cycle effects, analyze the noise in the distributions, and evaluate the accuracy of our model. A key result is that the threshold voltage distribution can be modeled, with more than 95% accuracy, as a Gaussian distribution with additive white noise, which shifts to the right and widens as P/E cycles increase. The novel characterization and models provided in this paper can enable the design of more effective error tolerance mechanisms for future flash memories.", "title": "" }, { "docid": "3763da6b72ee0a010f3803a901c9eeb2", "text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.", "title": "" } ]
[ { "docid": "653ca5c9478b1b1487fc24eeea8c1677", "text": "A fundamental question in information theory and in computer science is how to measure similarity or the amount of shared information between two sequences. We have proposed a metric, based on Kolmogorov complexity, to answer this question and have proven it to be universal. We apply this metric in measuring the amount of shared information between two computer programs, to enable plagiarism detection. We have designed and implemented a practical system SID (Software Integrity Diagnosis system) that approximates this metric by a heuristic compression algorithm. Experimental results demonstrate that SID has clear advantages over other plagiarism detection systems. SID system server is online at http://software.bioinformatics.uwaterloo.ca/SID/.", "title": "" }, { "docid": "2fc1afae973ddd832afa92d27222ef09", "text": "In our 1990 paper, we showed that managers concerned with their reputations might choose to mimic the behavior of other managers and ignore their own information. We presented a model in which “smart” managers receive correlated, informative signals, whereas “dumb” managers receive independent, uninformative signals. Managers have an incentive to follow the herd to indicate to the labor market that they have received the same signal as others, and hence are likely to be smart. This model of reputational herding has subsequently found empirical support in a number of recent papers, including Judith A. Chevalier and Glenn D. Ellison’s (1999) study of mutual fund managers and Harrison G. Hong et al.’s (2000) study of equity analysts. We argued in our 1990 paper that reputational herding “requires smart managers’ prediction errors to be at least partially correlated with each other” (page 468). In their Comment, Marco Ottaviani and Peter Sørensen (hereafter, OS) take issue with this claim. They write: “correlation is not necessary for herding, other than in degenerate cases.” It turns out that the apparent disagreement hinges on how strict a definition of herding one adopts. In particular, we had defined a herding equilibrium as one in which agentB alwaysignores his own information and follows agent A. (See, e.g., our Propositions 1 and 2.) In contrast, OS say that there is herding when agent B sometimesignores his own information and follows agent A. The OS conclusion is clearly correct given their weaker definition of herding. At the same time, however, it also seems that for the stricter definition that we adopted in our original paper, correlated errors on the part of smart managers are indeed necessary for a herding outcome—even when one considers the expanded parameter space that OS do. We will try to give some intuition for why the different definitions of herding lead to different conclusions about the necessity of correlated prediction errors. Along the way, we hope to convince the reader that our stricter definition is more appropriate for isolating the economic effects at work in the reputational herding model. An example is helpful in illustrating what is going on. Consider a simple case where the parameter values are as follows: p 5 3⁄4; q 5 1⁄4; z 5 1⁄2, andu 5 1⁄2. In our 1990 paper, we also imposed the constraint that z 5 ap 1 (1 2 a)q, which further implies thata 5 1⁄2. The heart of the OS Comment is the idea that this constraint should be disposed of—i.e., we should look at other values of a. 
Without loss of generality, we will consider values of a above 1⁄2, and distinguish two cases.", "title": "" }, { "docid": "b929cbcaf8de8e845d1cf7f59d3eca63", "text": "This paper presents 35 GHz single-pole-single-throw (SPST) and single-pole-double-throw (SPDT) CMOS switches using a 0.13 mum BiCMOS process (IBM 8 HP). The CMOS transistors are designed to have a high substrate resistance to minimize the insertion loss and improve power handling capability. The SPST/SPDT switches have a insertion loss of 1.8 dB/2.2 dB, respectively, and an input 1-dB compression point (P1 dB) greater than 22 dBm. The isolation is greater than 30 dB at 35-40 GHz and is achieved using two parallel resonant networks. To our knowledge, this is the first demonstration of low-loss, high-isolation CMOS switches at Ka-band frequencies.", "title": "" }, { "docid": "a54ac6991dce07d51ac028b8a249219e", "text": "Rearrangement of immunoglobulin heavy-chain variable (VH) gene segments has been suggested to be regulated by interleukin 7 signaling in pro–B cells. However, the genetic evidence for this recombination pathway has been challenged. Furthermore, no molecular components that directly control VH gene rearrangement have been elucidated. Using mice deficient in the interleukin 7–activated transcription factor STAT5, we demonstrate here that STAT5 regulated germline transcription, histone acetylation and DNA recombination of distal VH gene segments. STAT5 associated with VH gene segments in vivo and was recruited as a coactivator with the transcription factor Oct-1. STAT5 did not affect the nuclear repositioning or compaction of the immunoglobulin heavy-chain locus. Therefore, STAT5 functions at a distinct step in regulating distal VH recombination in relation to the transcription factor Pax5 and histone methyltransferase Ezh2.", "title": "" }, { "docid": "8182fe419366744a774ff637c8ace5dd", "text": "The most useful environments for advancing research and development in video databases are those that provide complete video database management, including (1) video preprocessing for content representation and indexing, (2) storage management for video, metadata and indices, (3) image and semantic -based query processing, (4) realtime buffer management, and (5) continuous media streaming. Such environments support the entire process of investigating, implementing, analyzing and evaluating new techniques, thus identifying in a concrete way which techniques are truly practical and robust. In this paper we present a video database research initiative that culminated in the successful development of VDBMS, a video database research platform that supports comprehensive and efficient database management for digital video. We describe key video processing components of the system and illustrate the value of VDBMS as a research platform by describing several research projects carried out within the VDBMS environment. These include MPEG7 document support for video feature import and export, a new query operator for optimal multi-feature image similarity matching, secure access control for streaming video, and the mining of medical video data using hierarchical content organization.", "title": "" }, { "docid": "ec189ac55b64402d843721de4fc1f15c", "text": "DroidMiner is a new malicious Android app detection system that uses static analysis to automatically mine malicious program logic from known Android malware. 
DroidMiner uses a behavioral graph to abstract malware program logic into a sequence of threat modalities, and then applies machine-learning techniques to identify and label elements of the graph that match harvested threat modalities. Once trained on a mobile malware corpus, DroidMiner can automatically scan a new Android app to (i) determine whether it contains malicious modalities, (ii) diagnose the malware family to which it is most closely associated, and (iii) precisely characterize behaviors found within the analyzed app. While DroidMiner is not the first to attempt automated classification of Android applications based on Framework API calls, it is distinguished by its development of modalities that are resistant to noise insertions and its use of associative rule mining that enables automated association of malicious behaviors with modalities. We evaluate DroidMiner using 2,466 malicious apps, identified from a corpus of over 67,000 third-party market Android apps, plus an additional set of over 10,000 official market Android apps. Using this set of real-world apps, DroidMiner achieves a 95.3% detection rate, with a 0.4% false positive rate. We further evaluate DroidMiner’s ability to classify malicious apps under their proper family labels, and measure its label accuracy at 92%.", "title": "" }, { "docid": "e94c9f0ef8e696a1b2e85f18f98d2e36", "text": "Driven by pervasive mobile devices and ubiquitous wireless communication networks, mobile cloud computing emerges as an appealing paradigm to accommodate demands for running power-hungry or computation-intensive applications over resource-constrained mobile devices. Cloudlets that move available resources closer to the network edge offer a promising architecture to support real-time applications, such as online gaming and speech recognition. To stimulate service provisioning by cloudlets, it is essential to design an incentive mechanism that charges mobile devices and rewards cloudlets. Although auction has been considered as a promising form for incentive, it is challenging to design an auction mechanism that holds certain desirable properties for the cloudlet scenario. In this paper, we propose an incentive-compatible auction mechanism (ICAM) for the resource trading between the mobile devices as service users (buyers) and cloudlets as service providers (sellers). ICAM can effectively allocate cloudlets to satisfy the service demands of mobile devices and determine the pricing. Both the theoretical analysis and the numerical results show that the ICAM guarantees desired properties with respect to individual rationality, budget balance and truthfulness (incentive compatibility) for both the buyers and the sellers, and computational efficiency.", "title": "" }, { "docid": "a56bae9c90490b8630fcb421bc478e56", "text": "A 48-year-old Nicaraguan man underwent magnetic resonance imaging (MRI) 3 days after admission to a South Florida hospital for treatment of cellulitis of the right thigh with vancomycin. MRI had been ordered to evaluate for a possible drainable source of infection, as the clinical picture and duration of illness was worsening and longer than expected for typical uncomplicated cellulitis despite intravenous antibiotic therapy. The MRI showed multiple enlarged inguinal lymph nodes and cellulitis of the superficial soft tissues of the thigh without discrete drainable collections. 
After the procedure, the patient was noted to have a bullous lesion on each thigh, each lesion roughly 1 cm in diameter and filled with clear serous fluid (Figure 1). He reported that his legs had been pressed together before entering the MRI machine and that he had felt a burning sensation in both thighs during the test. Examination confirmed that the lesions indeed aligned with each other when he pressed his thighs together. Study of a biopsy of one of the lesions revealed subepidermal cell blisters with focal epidermal necrosis and coagulative changes in the superficial dermis, consistent with a thermal injury (Figure 2).", "title": "" }, { "docid": "5d3275250a345b5f8c8a14a394025a31", "text": "Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could result in not only a considerable impact on train delays and maintenance costs, but also on safety of passengers. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats that are detected automatically among the huge number of records from video cameras. We propose an image processing approach for automatic detection of squats, especially severe types that are prone to rail breaks. We measure the visual length of the squats and use them to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach.", "title": "" }, { "docid": "2f1dc4a089f88d6f7e39b10f53321e89", "text": "A new technique for summarizing news articles using a neural network is presented. A neural network is trained to learn the relevant characteristics of sentences that should be included in the summary of the article. The neural network is then modified to generalize and combine the relevant characteristics apparent in summary sentences. Finally, the modified neural network is used as a filter to summarize news articles.", "title": "" }, { "docid": "a9e26514ffc78c1018e00c63296b9584", "text": "When labeled examples are limited and difficult to obtain, transfer learning employs knowledge from a source domain to improve learning accuracy in the target domain. However, the assumption made by existing approaches, that the marginal and conditional probabilities are directly related between source and target domains, has limited applicability in either the original space or its linear transformations. To solve this problem, we propose an adaptive kernel approach that maps the marginal distribution of target-domain and source-domain data into a common kernel space, and utilize a sample selection strategy to draw conditional probabilities between the two domains closer. We formally show that under the kernel-mapping space, the difference in distributions between the two domains is bounded; and the prediction error of the proposed approach can also be bounded. Experimental results demonstrate that the proposed method outperforms both traditional inductive classifiers and the state-of-the-art boosting-based transfer algorithms on most domains, including text categorization and web page ratings. 
In particular, it can achieve around 10% higher accuracy than other approaches for the text categorization problem. The source code and datasets are available from the authors.", "title": "" }, { "docid": "f374d363865d6f1102cbebd061380650", "text": "OBJECTIVES\nTo identify lessons from and gaps in research on diet-disease links among former migrants in the United Kingdom (UK).\n\n\nRESULTS\nMigrant status and self-identified ethnicity do not match so these terms mask differences in social, nutritional and health status within and between population groups. Some former migrants differ in causes of death from the general population, e.g.: fewer coronary heart disease deaths among Caribbean-born; fewer cancer deaths among Caribbean, South Asian- and East African-born adults. Irish- and Scottish-born have higher mortality from all causes. Experience of risk factors differ also, e.g.: higher prevalences of hypertension and diabetes in Caribbean- and South Asian-born adults than representative samples of the general population; obesity and raised waist-hip circumference ratios in South Asian, African-Caribbean and some Irish-born adults. Former migrants experience long-term disadvantage, associated with more self-defined illness and lower reported physical activity. Nutrient intake data from the few, recent, small-scale studies must be interpreted with caution due to methodological diversity. However, second generation offspring of former migrants appear to adopt British dietary patterns, increasing fat and reducing vegetable, fruit and pulse consumption compared with first generation migrants.\n\n\nCONCLUSIONS\nThere is insufficient evidence on why some former migrants but not others experience lower specific mortality than the general population. Dietary intake variations provide important clues particularly when examined by age and migration status. Majority ethnic and younger migrant groups could raise and sustain high fruit and vegetable intakes but lower proportions of fat, by adopting many dietary practices from older migrants. Objective measures of physical activity and longitudinal studies of diets among different ethnic groups are needed to explain diversity in health outcomes and provide for evidence-based action.", "title": "" }, { "docid": "ecc4f1d5fb66b816daa9ae514bd58b45", "text": "In this paper, we introduce SLQS, a new entropy-based measure for the unsupervised identification of hypernymy and its directionality in Distributional Semantic Models (DSMs). SLQS is assessed through two tasks: (i.) identifying the hypernym in hyponym-hypernym pairs, and (ii.) discriminating hypernymy among various semantic relations. In both tasks, SLQS outperforms other state-of-the-art measures.", "title": "" }, { "docid": "6eebd82e4d2fe02e9b26190638e9d159", "text": "Agile development methodologies have been gaining acceptance in the mainstream software development community. While there are numerous studies of agile development in academic and educational settings, there has been little detailed reporting of the usage, penetration and success of agile methodologies in traditional, professional software development organizations. We report on the results of an empirical study conducted at Microsoft to learn about agile development and its perception by people in development, testing, and management. 
We found that one-third of the study respondents use agile methodologies to varying degrees, and most view it favorably due to improved communication between team members, quick releases and the increased flexibility of agile designs. The scrum variant of agile methodologies is by far the most popular at Microsoft. Our findings also indicate that developers are most worried about scaling agile to larger projects (greater than twenty members), attending too many meetings and the coordinating agile and non-agile teams.", "title": "" }, { "docid": "30986d682a82bdcfebaba54154a190e3", "text": "Scheduling has been an active area of research in computing systems since their inception. Hadoop framework has become very much popular and most widely used in distributed data processing. Hadoop has become a central platform to store big data through its Hadoop Distributed File System (HDFS) as well as to run analytics on this stored big data using its MapReduce component. The main objective is to study MapReduce framework, MapReduce model, scheduling in hadoop, various scheduling algorithms and various optimization techniques in job scheduling. Scheduling algorithms of MapReduce model using hadoop vary with design and behaviour, and are used for handling many issues like data locality, awareness with resource, energy and time.", "title": "" }, { "docid": "20d96905880332d6ef5a33b4dd0d8827", "text": "In spite of the fact that equal opportunities for men and women have been a priority in many countries, enormous gender differences prevail in most competitive high-ranking positions. We conduct a series of controlled experiments to investigate whether women might react differently than men to competitive incentive schemes commonly used in job evaluation and promotion. We observe no significant gender difference in mean performance when participants are paid proportional to their performance. But in the competitive environment with mixed gender groups we observe a significant gender difference: the mean performance of men has a large and significant, that of women is unchanged. This gap is not due to gender differences in risk aversion. We then run the same test with homogeneous groups, to investigate whether women under-perform only when competing against men. Women do indeed increase their performance and gender differences in mean performance are now insignificant. These results may be due to lower skill of women, or more likely to the fact that women dislike competition, or alternatively that they feel less competent than their male competitors, which depresses their performance in mixed tournaments. Our last experiment provides support for this hypothesis.", "title": "" }, { "docid": "c01072bc843aafc88b157b6de1878829", "text": "A new MEMS capacitive accelerometer has been developed to meet the requirements for oil and gas exploration, specifically for imaging deep and complex subterranean features. The sensor has been optimized to have a very low noise floor in a frequency range of 1–200 Hz. Several design and process parameters were modified from our previous sensors to reduce noise. Testing of the sensor has demonstrated a noise floor of 10ng/√Hz, in agreement with our predictive noise models. The sensor has a dynamic range of 120db with a maximum acceleration of +/− 80mg. In addition to the performance specifications, automated calibration routines have been implemented, allowing bias and sensitivity calibrations to be done in the field to ensure valid and accurate data. 
The sensor frequency and quality factor can also be measured in the field for an automated sensor health check.", "title": "" }, { "docid": "60fa6928d67628eb0cc695a677a3f1c9", "text": "The assumption that there are innate integrative or actualizing tendencies underlying personality and social development is reexamined. Rather than viewing such processes as either nonexistent or as automatic, I argue that they are dynamic and dependent upon social-contextual supports pertaining to basic human psychological needs. To develop this viewpoint, I conceptually link the notion of integrative tendencies to specific developmental processes, namely intrinsic motivation; internalization; and emotional integration. These processes are then shown to be facilitated by conditions that fulfill psychological needs for autonomy, competence, and relatedness, and forestalled within contexts that frustrate these needs. Interactions between psychological needs and contextual supports account, in part, for the domain and situational specificity of motivation, experience, and relative integration. The meaning of psychological needs (vs. wants) is directly considered, as are the relations between concepts of integration and autonomy and those of independence, individualism, efficacy, and cognitive models of \"multiple selves.\"", "title": "" }, { "docid": "f43478501471f5b9b8de429958016b7d", "text": "A growing amount of consumers are making purchases online. Due to this rise in online retail, online credit card fraud is increasingly becoming a common type of theft. Previously used rule based systems are no longer scalable, because fraudsters can adapt their strategies over time. The advantage of using machine learning is that it does not require an expert to design rules which need to be updated periodically. Furthermore, algorithms can adapt to new fraudulent behaviour by retraining on newer transactions. Nevertheless, fraud detection by means of data mining and machine learning comes with a few challenges as well. The very unbalanced nature of the data and the fact that most payment processing companies only process a fragment of the incoming traffic from merchants, makes it hard to detect reliable patterns. Previously done research has focussed mainly on augmenting the data with useful features in order to improve the detectable patterns. These papers have proven that focussing on customer transaction behavior provides the necessary patterns in order to detect fraudulent behavior. In this thesis we propose several bayesian network models which rely on latent representations of fraudulent transactions, non-fraudulent transactions and customers. These representations are learned using unsupervised learning techniques. We show that the methods proposed in this thesis significantly outperform state-of-the-art models without using elaborate feature engineering strategies. A portion of this thesis focuses on re-implementing two of these feature engineering strategies in order to support this claim. Results from these experiments show that modeling fraudulent and non-fraudulent transactions individually generates the best performance in terms of classification accuracy. In addition, we focus on varying the dimensions of the latent space in order to assess its effect on performance. 
Our final results show that a higher dimensional latent space does not necessarily improve the performance of our models.", "title": "" }, { "docid": "d1eed1d7875930865944c98fbab5f7e1", "text": "Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises of two steps. First, the OD location is identified using template matching and directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines the region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD location methodology obtained 98.3% success rate, while fovea location achieved 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and “gold standard” is 10.5% of estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.", "title": "" } ]
scidocsrr
9e8faec46b167c8aa3531a7e88cb4905
Combined Depth and Outlier Estimation in Multi-View Stereo
[ { "docid": "db8325925cb9fd1ebdcf7480735f5448", "text": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.", "title": "" } ]
[ { "docid": "5efebde0526dbb7015ecef066b76d1a9", "text": "Recent advances in mixed-reality technologies have renewed interest in alternative modes of communication for human-robot interaction. However, most of the work in this direction has been confined to tasks such as teleoperation, simulation or explication of individual actions of a robot. In this paper, we will discuss how the capability to project intentions affect the task planning capabilities of a robot. Specifically, we will start with a discussion on how projection actions can be used to reveal information regarding the future intentions of the robot at the time of task execution. We will then pose a new planning paradigm - projection-aware planning - whereby a robot can trade off its plan cost with its ability to reveal its intentions using its projection actions. We will demonstrate each of these scenarios with the help of a joint human-robot activity using the HoloLens.", "title": "" }, { "docid": "dcbfaec8966e10b8b87311f17bf9a3c5", "text": "The study presented here investigated the effects of emotional valence on the memory for words by assessing both memory performance and pupillary responses during a recognition memory task. Participants had to make speeded judgments on whether a word presented in the test phase of the experiment had already been presented (\"old\") or not (\"new\"). An emotion-induced recognition bias was observed: Words with emotional content not only produced a higher amount of hits, but also elicited more false alarms than neutral words. Further, we found a distinct pupil old/new effect characterized as an elevated pupillary response to hits as opposed to correct rejections. Interestingly, this pupil old/new effect was clearly diminished for emotional words. We therefore argue that the pupil old/new effect is not only able to mirror memory retrieval processes, but also reflects modulation by an emotion-induced recognition bias.", "title": "" }, { "docid": "8b46e6e341f4fdf4eb18e66f237c4000", "text": "We present a general learning-based approach for phrase-level sentiment analysis that adopts an ordinal sentiment scale and is explicitly compositional in nature. Thus, we can model the compositional effects required for accurate assignment of phrase-level sentiment. For example, combining an adverb (e.g., “very”) with a positive polar adjective (e.g., “good”) produces a phrase (“very good”) with increased polarity over the adjective alone. Inspired by recent work on distributional approaches to compositionality, we model each word as a matrix and combine words using iterated matrix multiplication, which allows for the modeling of both additive and multiplicative semantic effects. Although the multiplication-based matrix-space framework has been shown to be a theoretically elegant way to model composition (Rudolph and Giesbrecht, 2010), training such models has to be done carefully: the optimization is nonconvex and requires a good initial starting point. This paper presents the first such algorithm for learning a matrix-space model for semantic composition. In the context of the phrase-level sentiment analysis task, our experimental results show statistically significant improvements in performance over a bagof-words model.", "title": "" }, { "docid": "f462de59dd8b45f7c7e27672125010d2", "text": "Researchers have recently noted (14; 27) the potential of fast poisoning attacks against DNS servers, which allows attackers to easily manipulate records in open recursive DNS resolvers. 
A vendor-wide upgrade mitigated but did not eliminate this attack. Further, existing DNS protection systems, including bailiwick-checking (12) and IDS-style filtration, do not stop this type of DNS poisoning. We therefore propose Anax, a DNS protection system that detects poisoned records in cache. Our system can observe changes in cached DNS records, and applies machine learning to classify these updates as malicious or benign. We describe our classification features and machine learning model selection process while noting that the proposed approach is easily integrated into existing local network protection systems. To evaluate Anax, we studied cache changes in a geographically diverse set of 300,000 open recursive DNS servers (ORDNSs) over an eight month period. Using hand-verified data as ground truth, evaluation of Anax showed a very low false positive rate (0.6% of all new resource records) and a high detection", "title": "" }, { "docid": "2a38b72183fa2014697fc3bf89b3bd73", "text": "Neural collaborative filtering (NCF) [15] and recurrent recommender systems (RRN) [38] have been successful in modeling user-item relational data. However, they are also limited in their assumption of static or sequential modeling of relational data as they do not account for evolving users’ preference over time as well as changes in the underlying factors that drive the change in user-item relationship over time. We address these limitations by proposing a Neural network based Tensor Factorization (NTF) model for predictive tasks on dynamic relational data. The NTF model generalizes conventional tensor factorization from two perspectives: First, it leverages the long short-term memory architecture to characterize the multi-dimensional temporal interactions on relational data. Second, it incorporates the multi-layer perceptron structure for learning the non-linearities between different latent factors. Our extensive experiments demonstrate the significant improvement in rating prediction and link prediction on dynamic relational data by our NTF model over both neural network based factorization models and other traditional methods. ACM Reference Format: Xian Wu, Baoxu Shi, Yuxiao Dong, Chao Huang, Nitesh V. Chawla. 2018. Neural Tensor Factorization. In Proceedings of International Conference of Knowledge Discovery and Data Mining (KDD’18). ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn", "title": "" }, { "docid": "52e28bd011df723642b6f4ee83ab448d", "text": "Researchers in a variety of fields, including aeolian science, biology, and environmental science, have already made use of stationary and mobile remote sensing equipment to increase their variety of data collection opportunities. However, due to mobility challenges, remote sensing opportunities relevant to desert environments and in particular dune fields have been limited to stationary equipment. We describe here an investigative trip to two well-studied experimental deserts in New Mexico with DRHex, a mobile remote sensing platform oriented towards desert research. 
D-RHex is the latest iteration of the RHex family of robots, which are six-legged, biologically inspired, small (10kg) platforms with good mobility in a variety of rough terrains, including on inclines and over obstacles of higher than robot hip height.", "title": "" }, { "docid": "c279eb1ca03937d8321beb4c3c448e81", "text": "This paper describes the development processes for a cross-platform ubiquitous language learning service via interactive television (iTV) and mobile phone. Adapting a learner-centred design methodology, a number of requirements were gathered from multiple sources that were subsequently used in TAMALLE (television and mobile phone assisted language learning environment) development.Anumber of issues that arise in the context of cross-platform user interface design and architecture for ubiquitous language learning were tackled. Finally, we discuss a multi-method evaluation regime to gauge usability, perceived usefulness and desirability of TAMALLE system. The result broadly revealed an overall positive response from language learners. Although, there were some reported difficulties in reading text and on-screen display mainly on the iTV side of the interface, TAMALLE was perceived to be a usable, useful and desirable tool to support informal language learning and also for gaining new contextual and cultural knowledge.", "title": "" }, { "docid": "867e59b8f2dd4ccc0fdd3820853dc60e", "text": "Software product lines are hard to configure. Techniques that work for medium sized product lines fail for much larger product lines such as the Linux kernel with 6000+ features. This paper presents simple heuristics that help the Indicator-Based Evolutionary Algorithm (IBEA) in finding sound and optimum configurations of very large variability models in the presence of competing objectives. We employ a combination of static and evolutionary learning of model structure, in addition to utilizing a pre-computed solution used as a “seed” in the midst of a randomly-generated initial population. The seed solution works like a single straw that is enough to break the camel's back -given that it is a feature-rich seed. We show promising results where we can find 30 sound solutions for configuring upward of 6000 features within 30 minutes.", "title": "" }, { "docid": "95e3da5f05e2ec86cb4e3ce23da15de1", "text": "The aim of this paper is to show a methodology to perform the mechanical design of a 6-DOF lightweight manipulator for assembling bar structures using a rotary-wing UAV. The architecture of the aerial manipulator is based on a comprehensive performance analysis, a manipulability study of the different options and a previous evaluation of the required motorization. The manipulator design consists of a base attached to the UAV landing gear, a robotic arm that supports 6-DOF, and a gripper-style end effector specifically developed for grasping bars as a result of this study. An analytical expression of the manipulator kinematic model is obtained.", "title": "" }, { "docid": "15cb7023c175e2c92cd7b392205fb87f", "text": "Feedback has a strong influence on effective learning from computer-based instruction. Prior research on feedback in computer-based instruction has mainly focused on static feedback schedules that employ the same feedback schedule throughout an instructional session. This study examined transitional feedback schedules in computer-based multimedia instruction on procedural problem-solving in electrical circuit analysis. 
Specifically, we compared two transitional feedback schedules: the TFS-P schedule switched from initial feedback after each problem step to feedback after a complete problem at later learning states; the TFP-S schedule transitioned from feedback after a complete problem to feedback after each problem step. As control conditions, we also considered two static feedback schedules, namely providing feedback after each practice problem-solving step (SFS) or providing feedback after attempting a complete multi-step practice problem (SFP). Results indicate that the static stepwise (SFS) and transitional stepwise to problem (TFS-P) feedback produce higher problem solving near-transfer post-test performance than static problem (SFP) and transitional problem to step (TFP-S) feedback. Also, TFS-P resulted in higher ratings of program liking and feedback helpfulness than TFP-S. Overall, the study results indicate benefits of maintaining high feedback frequency (SFS) and reducing feedback frequency (TFS-P) compared to low feedback frequency (SFP) or increasing feedback frequency (TFP-S) as novice learners acquire engineering problem solving skills. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "52a5f4c15c1992602b8fe21270582cc6", "text": "This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On realworld sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.", "title": "" }, { "docid": "ec58969d15eb194fd7cb57843124b425", "text": "Fully convolutional neural networks give accurate, per-pixel prediction for input images and have applications like semantic segmentation. However, a typical FCN usually requires lots of floating point computation and large run-time memory, which effectively limits its usability. We propose a method to train Bit Fully Convolution Network (BFCN), a fully convolutional neural network that has low bit-width weights and activations. Because most of its computation-intensive convolutions are accomplished between low bit-width numbers, a BFCN can be accelerated by an efficient bit-convolution implementation. On CPU, the dot product operation between two bit vectors can be reduced to bitwise operations and popcounts, which can offer much higher throughput than 32-bit multiplications and additions. To validate the effectiveness of BFCN, we conduct experiments on the PASCAL VOC 2012 semantic segmentation task and Cityscapes. Our BFCN with 1-bit weights and 2-bit activations, which runs 7.8x faster on CPU or requires less than 1% resources on FPGA, can achieve comparable performance as the 32-bit counterpart. 
Table 1 (summary results of our BFCNs, performance measured in mean IoU): 32-bit FCN: VOC12 69.8%, Cityscapes 62.1%, speedup 1x; 2-bit BFCN: VOC12 67.0%, Cityscapes 60.3%, speedup 4.1x; 1-2 BFCN: VOC12 62.8%, Cityscapes 57.4%, speedup 7.8x. Introduction Deep convolutional neural networks (DCNN), with its recent progress, has considerably changed the landscape of computer vision (Krizhevsky, Sutskever, and Hinton 2012) and many other fields. To achieve close to state-of-the-art performance, a DCNN usually has a lot of parameters and high computational complexity, which may easily overwhelm resource capability of embedded devices. Substantial research efforts have been invested in speeding up DCNNs on both general-purpose (Vanhoucke, Senior, and Mao 2011; Gong et al. 2014; Han et al. 2015) and specialized computer hardware (Farabet et al. 2009; Farabet et al. 2011; Pham et al. 2012; Chen et al. 2014b; Chen et al. 2014c; Zhang et al. 2015a). Recent progress in using low bit-width networks has considerably reduced parameter storage size and computation burden by using 1-bit weight and low bit-width activations. In particular, in BNN (Kim and Smaragdis 2016) and XNOR-net (Rastegari et al. 2016), during the forward pass the most computationally expensive convolutions can be done by combining xnor and popcount operations, thanks to the following equivalence when x and y are bit vectors:", "title": "" }, { "docid": "c8cd0f14edee76888e4f1fd0ccc72dfa", "text": "BACKGROUND\nTotal hip and total knee arthroplasties are well accepted as reliable and suitable surgical procedures to return patients to function. Health-related quality-of-life instruments have been used to document outcomes in order to optimize the allocation of resources. The objective of this study was to review the literature regarding the outcomes of total hip and knee arthroplasties as evaluated by health-related quality-of-life instruments.\n\n\nMETHODS\nThe Medline and EMBASE medical literature databases were searched, from January 1980 to June 2003, to identify relevant studies. Studies were eligible for review if they met the following criteria: (1). the language was English or French, (2). at least one well-validated and self-reported health-related quality of life instrument was used, and (3). a prospective cohort study design was used.\n\n\nRESULTS\nOf the seventy-four studies selected for the review, thirty-two investigated both total hip and total knee arthroplasties, twenty-six focused on total hip arthroplasty, and sixteen focused on total knee arthroplasty exclusively. The most common diagnosis was osteoarthritis. The duration of follow-up ranged from seven days to seven years, with the majority of studies describing results at six to twelve months. The Short Form-36 and the Western Ontario and McMaster University Osteoarthritis Index, the most frequently used instruments, were employed in forty and twenty-eight studies, respectively. Seventeen studies used a utility index. Overall, total hip and total knee arthroplasties were found to be quite effective in terms of improvement in health-related quality-of-life dimensions, with the occasional exception of the social dimension. Age was not found to be an obstacle to effective surgery, and men seemed to benefit more from the intervention than did women. When improvement was found to be modest, the role of comorbidities was highlighted. 
Total hip arthroplasty appears to return patients to function to a greater extent than do knee procedures, and primary surgery offers greater improvement than does revision. Patients who had poorer preoperative health-related quality of life were more likely to experience greater improvement.\n\n\nCONCLUSIONS\nHealth-related quality-of-life data are valuable, can provide relevant health-status information to health professionals, and should be used as a rationale for the implementation of the most adequate standard of care. Additional knowledge and scientific dissemination of surgery outcomes should help to ensure better management of patients undergoing total hip or total knee arthroplasty and to optimize the use of these procedures.", "title": "" }, { "docid": "9611a618b33547851974b68197698eaf", "text": "Today’s Internet appliances feature user interface technologies almost unknown a few years ago: touch screens, styli, handwriting and voice recognition, speech synthesis, tiny screens, and more. This richness creates problems. First, different appliances use different languages: WML for cell phones; SpeechML, JSML, and VoxML for voice enabled devices such as phones; HTML and XUL for desktop computers, and so on. Thus, developers must maintain multiple source code families to deploy interfaces to one information system on multiple appliances. Second, user interfaces differ dramatically in complexity (e.g, PC versus cell phone interfaces). Thus, developers must also manage interface content. Third, developers risk writing appliance-specific interfaces for an appliance that might not be on the market tomorrow. A solution is to build interfaces with a single, universal language free of assumptions about appliances and interface technology. This paper introduces such a language, the User Interface Markup Language (UIML), an XML-compliant language. UIML insulates the interface designer from the peculiarities of different appliances through style sheets. A measure of the power of UIML is that it can replace hand-coding of Java AWT or Swing user interfaces.  1999 Published by Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "0e510d0c460ba1555849967e6a74b97d", "text": "The dynamic mathematical models of six channels of six-axis wrist force/torque sensor for robots are built up by the system identification method based on the experimental data of step-response calibrations. The performance indexes of frequency domain are determined, and the dynamic characteristics of the wrist force sensor are described accurately. In order to speed the dynamic responses of the wrist force sensor, the dynamic compensating devices are designed using the method of functional link artificial neural network, and a digital-signal-processor-based real-time dynamic compensation system is developed, which includes six compensating devices. The experimental results show that the adjusting time of dynamic response of the wrist force sensor is reduced to less than 25%.", "title": "" }, { "docid": "7b681d1f200c0281beb161b71e6a3604", "text": "Data quality remains a persistent problem in practice and a challenge for research. In this study we focus on the four dimensions of data quality noted as the most important to information consumers, namely accuracy, completeness, consistency, and timeliness. 
These dimensions are of particular concern for operational systems, and most importantly for data warehouses, which are often used as the primary data source for analyses such as classification, a general type of data mining. However, the definitions and conceptual models of these dimensions have not been collectively considered with respect to data mining in general or classification in particular. Nor have they been considered for problem complexity. Conversely, these four dimensions of data quality have only been indirectly addressed by data mining research. Using definitions and constructs of data quality dimensions, our research evaluates the effects of both data quality and problem complexity on generated data and tests the results in a real-world case. Six different classification outcomes selected from the spectrum of classification algorithms show that data quality and problem complexity have significant main and interaction effects. From the findings of significant effects, the economics of higher data quality are evaluated for a frequent application of classification and illustrated by the real-world case.", "title": "" }, { "docid": "782871adcc65ce83e92728ca9151ceff", "text": "OBJECTIVES\nA number of studies have pointed to the pressure that housing costs can exert on the resources available for food. The objectives of the present study were to characterise the relationship between the proportion of income absorbed by housing and the adequacy of household food expenditures across the Canadian population and within income quintiles; and to elucidate the impact of receipt of a housing subsidy on adequacy of food expenditures among low-income tenant households.\n\n\nDESIGN\nThe 2001 Survey of Household Spending, conducted by Statistics Canada, was a national cross-sectional survey that collected detailed information on expenditures on goods and services. The adequacy of food spending was assessed in relation to the cost of a basic nutritious diet.\n\n\nSETTING\nCanada.\n\n\nSUBJECTS\nThe person with primary responsibility for financial maintenance from 15 535 households from all provinces and territories.\n\n\nRESULTS\nAs the proportion of income allocated to housing increased, food spending adequacy declined significantly among households in the three lowest income quintiles. After accounting for household income and composition, receipt of a housing subsidy was associated with an improvement in adequacy of food spending among low-income tenant households, but still mean food spending fell below the cost of a basic nutritious diet even among subsidised households.\n\n\nCONCLUSIONS\nThis study indicates that housing costs compromise the food access of some low-income households and speaks to the need to re-examine policies related to housing affordability and income adequacy.", "title": "" }, { "docid": "c04ae48f1ff779da8a565653c0976636", "text": "It is widely agreed on that most cognitive processes are contextual in the sense that they depend on the environment, or context, inside which they are carried on. Even concentrating on the issue of contextuality in reasoning, many different notions of context can be found in the Artificial Intelligence literature, see for instance [Giunchiglia 1991a, Giunchiglia & Weyhrauch 1988, Guha 1990, Guha & Lenat 1990, Shoham 1991, McCarthy 1990b]. Our intuition is that reasoning is usually performed on a subset of the global knowledge base; we never consider all we know but only a very small subset of it. 
The notion of context is used as a means of formalizing this idea of localization. Roughly speaking, we take a context to be the set of facts used locally to prove a given goal plus the inference routines used to reason about them (which in general are different for different sets of facts). Our perspective is similar to that proposed in [McCarthy 1990b, McCarthy 1991]. The goal of this paper is to propose an epistemologically adequate theory of reasoning with contexts. The emphasis is on motivations and intuitions, rather than on technicalities. The two basic definitions are reported in appendix A. Ideas are described incrementally with increasing level of detail. Thus, section 2 describes why contexts are an important notion to consider as part of our ontology. This is achieved also by comparing contexts with situations, another ontologically very important concept. Section 3 then goes more into the technical details and proposes that contexts should be formalized as particular mathematical objects, namely as logical theories. Reasoning with contexts is then formalized as a set of deductions, each deduction carried out inside a context, connected by appropriate \"bridge rules\". Finally, section 4 describes how an important example of common sense reasoning, reasoning about reasoning, can be formalized as multicontextual reasoning.", "title": "" }, { "docid": "ad2655aaed8a4f3379cb206c6e405f16", "text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.", "title": "" }, { "docid": "98110985cd175f088204db452a152853", "text": "We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. 
This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.", "title": "" } ]
scidocsrr
9cd13e25c4ba0cf67e8bbe5ba4888481
Traceability and visual analytics for the Internet-of-Things (IoT) architecture
[ { "docid": "5e858796f025a9e2b91109835d827c68", "text": "Several divergent application protocols have been proposed for Internet of Things (IoT) solutions including CoAP, REST, XMPP, AMQP, MQTT, DDS, and others. Each protocol focuses on a specific aspect of IoT communications. The lack of a protocol that can handle the vertical market requirements of IoT applications including machine-to-machine, machine-to-server, and server-to-server communications has resulted in a fragmented market between many protocols. In turn, this fragmentation is a main hindrance in the development of new services that require the integration of multiple IoT services to unlock new capabilities and provide horizontal integration among services. In this work, after articulating the major shortcomings of the current IoT protocols, we outline a rule-based intelligent gateway that bridges the gap between existing IoT protocols to enable the efficient integration of horizontal IoT services. While this intelligent gateway enhances the gloomy picture of protocol fragmentation in the context of IoT, it does not address the root cause of this fragmentation, which lies in the inability of the current protocols to offer a wide range of QoS guarantees. To offer a solution that stems the root cause of this protocol fragmentation issue, we propose a generic IoT protocol that is flexible enough to address the IoT vertical market requirements. In this regard, we enhance the baseline MQTT protocol by allowing it to support rich QoS features by exploiting a mix of IP multicasting, intelligent broker queuing management, and traffic analytics techniques. Our initial evaluation of the lightweight enhanced MQTT protocol reveals significant improvement over the baseline protocol in terms of the delay performance.", "title": "" } ]
[ { "docid": "72fea311bf7519db3d4c081a514709ab", "text": "This paper proposes a frequency-modulation control scheme for a dc/dc current-source parallel-resonant converter with two possible configurations. The basic configuration comprises an external voltage loop, an internal current loop, and a frequency modulator: the voltage loop is responsible for regulating the output voltage, the current loop makes the system controllable and limits the input current, and the modulator provides robustness against variations in resonant component values. The enhanced configuration introduces the output inductor current as a feed-forward term and clearly improves the transient response to fast load changes. The theoretical design of these control schemes is performed systematically by first deriving their small-signal models and second using Bode diagram analysis. The actual performance of the proposed control schemes is experimentally validated by testing on a laboratory prototype.", "title": "" }, { "docid": "d521b14ee04dbf69656240ef47c3319c", "text": "This paper presents a computationally efficient approach for temporal action detection in untrimmed videos that outperforms state-of-the-art methods by a large margin. We exploit the temporal structure of actions by modeling an action as a sequence of sub-actions. A novel and fully automatic sub-action discovery algorithm is proposed, where the number of sub-actions for each action as well as their types are automatically determined from the training videos. We find that the discovered sub-actions are semantically meaningful. To localize an action, an objective function combining appearance, duration and temporal structure of sub-actions is optimized as a shortest path problem in a network flow formulation. A significant benefit of the proposed approach is that it enables real-time action localization (40 fps) in untrimmed videos. We demonstrate state-of-the-art results on THUMOS’14 and MEXaction2 datasets.", "title": "" }, { "docid": "5d154a62b22415cbedd165002853315b", "text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.", "title": "" }, { "docid": "df37fccf8f93d33f5f1e86313b59fe00", "text": "Grape seeds and skins are good sources of phytochemicals such as gallic acid, catechin, and epicatechin and are suitable raw materials for the production of antioxidative dietary supplements. 
The differences in levels of the major monomeric flavanols and phenolic acids in seeds and skins from grapes of Vitis vinifera varieties Merlot and Chardonnay and in seeds from grapes of Vitis rotundifolia variety Muscadine were determined, and the antioxidant activities of these components were assessed. The contribution of the major monomeric flavonols and phenolic acid to the total antioxidant capacity of grape seeds and skins was also determined. Gallic acid, monomeric catechin, and epicatechin concentrations were 99, 12, and 96 mg/100 g of dry matter (dm) in Muscadine seeds, 15, 358, and 421 mg/100 g of dm in Chardonnay seeds, and 10, 127, and 115 mg/100 g of dm in Merlot seeds, respectively. Concentrations of these three compounds were lower in winery byproduct grape skins than in seeds. These three major phenolic constituents of grape seeds contributed <26% to the antioxidant capacity measured as ORAC on the basis of the corrected concentrations of gallic acid, catechin, and epicatechin in grape byproducts. Peroxyl radical scavenging activities of phenolics present in grape seeds or skins in decreasing order were resveratrol > catechin > epicatechin = gallocatechin > gallic acid = ellagic acid. The results indicated that dimeric, trimeric, oligomeric, or polymeric procyanidins account for most of the superior antioxidant capacity of grape seeds.", "title": "" }, { "docid": "3d401d8d3e6968d847795ccff4646b43", "text": "In spite of growing frequency and sophistication of attacks two factor authentication schemes have seen very limited adoption in the US, and passwords remain the single factor of authentication for most bank and brokerage accounts. Clearly the cost benefit analysis is not as strongly in favor of two factor as we might imagine. Upgrading from passwords to a two factor authentication system usually involves a large engineering effort, a discontinuity of user experience and a hard key management problem. In this paper we describe a system to convert a legacy password authentication server into a two factor system. The existing password system is untouched, but is cascaded with a new server that verifies possession of a smartphone device. No alteration, patching or updates to the legacy system is necessary. There are now two alternative authentication paths: one using passwords alone, and a second using passwords and possession of the trusted device. The bank can leave the password authentication path available while users migrate to the two factor scheme. Once migration is complete the password-only path can be severed. We have implemented the system and carried out two factor authentication against real accounts at several major banks.", "title": "" }, { "docid": "d3519a975a7fd596dd61e46c110947b6", "text": "Impacts of chronic overfishing are evident in population depletions worldwide, yet indirect ecosystem effects induced by predator removal from oceanic food webs remain unpredictable. As abundances of all 11 great sharks that consume other elasmobranchs (rays, skates, and small sharks) fell over the past 35 years, 12 of 14 of these prey species increased in coastal northwest Atlantic ecosystems. Effects of this community restructuring have cascaded downward from the cownose ray, whose enhanced predation on its bay scallop prey was sufficient to terminate a century-long scallop fishery. 
Analogous top-down effects may be a predictable consequence of eliminating entire functional groups of predators.", "title": "" }, { "docid": "0cecb071d4358e60a113a9815272959f", "text": "Single-cell RNA-Sequencing (scRNA-Seq) has become the most widely used high-throughput method for transcription profiling of individual cells. Systematic errors, including batch effects, have been widely reported as a major challenge in high-throughput technologies. Surprisingly, these issues have received minimal attention in published studies based on scRNA-Seq technology. We examined data from five published studies and found that systematic errors can explain a substantial percentage of observed cell-to-cell expression variability. Specifically, we found that the proportion of genes reported as expressed explains a substantial part of observed variability and that this quantity varies systematically across experimental batches. Furthermore, we found that the implemented experimental designs confounded outcomes of interest with batch effects, a design that can bring into question some of the conclusions of these studies. Finally, we propose a simple experimental design that can ameliorate the effect that these systematic errors have on downstream results. Single-cell RNA-Sequencing (scRNA-Seq) has become the primary tool for profiling the transcriptomes of hundreds or even thousands of individual cells in parallel. Our experience with high-throughput genomic data in general is that well thought-out data processing pipelines are essential to produce meaningful downstream results. We expect the same to be true for scRNA-Seq data. Here we show that while some tools developed for analyzing bulk RNA-Seq can be used for scRNA-Seq data, such as the mapping and alignment software, other steps in the processing, such as normalization, quality control and quantification, require new methods to account for the additional variability that is specific to this technology. One of the most challenging sources of unwanted variability and systematic error in high-throughput data are what are commonly referred to as batch effects. Given the way that scRNA-Seq experiments are conducted, there is much room for concern regarding batch effects. Specifically, batch effects occur when cells from one biological group or condition are cultured, captured and sequenced separately from cells in a second condition. Although batch information is not always included in the experimental annotations that are publicly available, one can extract surrogate variables from the raw sequencing (FASTQ) files. Namely, the sequencing instrument used, the run number from the instrument and the flow cell lane. Although the sequencing is unlikely to be a major source of unwanted variability, it serves as a surrogate for other experimental procedures that very likely do have an effect, such as starting material, PCR amplification reagents/conditions, and cell cycle stage of the cells. Here we will refer to the resulting differences induced by different groupings of these sources of variability as batch effects. In a completely confounded study, it is not possible to determine if the biological condition or batch effects are driving the observed variation. 
In contrast, incorporating biological replicates in the experimental design and processing the replicates across multiple batches permits observed variation to be attributed to biology or batch effects (Figure 1). To demonstrate the widespread problem of systematic bias, batch effects, and confounded experimental designs in scRNA-Seq studies, we surveyed several published data sets. We discuss the consequences of failing to consider the presence of this unwanted technical variability, and consider new strategies to minimize its impact on scRNA-Seq data.", "title": "" }, { "docid": "0d16b2f41e4285a5b89b31ed16f378a8", "text": "Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a set of non-linear transformations that connects the views. The R-NKTM is learned from 2D projections of dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms existing state-of-the-art.", "title": "" }, { "docid": "8f2b100dac154c54d928928296f830f6", "text": "The RPL routing protocol published in RFC 6550 was designed for efficient and reliable data collection in low-power and lossy networks. 
Specifically, it constructs a Destination Oriented Directed Acyclic Graph (DODAG) for data forwarding. However, due to the uneven deployment of sensor nodes in large areas, and the heterogeneous traffic patterns in the network, some sensor nodes may have much heavier workload in terms of packets forwarded than others. Such unbalanced workload distribution will result in these sensor nodes quickly exhausting their energy, and therefore shorten the overall network lifetime. In this paper, we propose a load balanced routing protocol based on the RPL protocol, named LB-RPL, to achieve balanced workload distribution in the network. Targeted at the low-power and lossy network environments, LB-RPL detects workload imbalance in a distributed and non-intrusive fashion. In addition, it optimizes the data forwarding path by jointly considering both workload distribution and link-layer communication qualities. We demonstrate the performance superiority of our LB-RPL protocol over original RPL through extensive simulations.", "title": "" }, { "docid": "d529d1052fce64ae05fbc64d2b0450ab", "text": "Today, many industrial companies must face problems raised by maintenance. In particular, the anomaly detection problem is probably one of the most challenging. In this paper we focus on the railway maintenance task and propose to automatically detect anomalies in order to predict in advance potential failures. We first address the problem of characterizing normal behavior. In order to extract interesting patterns, we have developed a method to take into account the contextual criteria associated to railway data (itinerary, weather conditions, etc.). We then measure the compliance of new data, according to extracted knowledge, and provide information about the seriousness and the exact localization of a detected anomaly. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ace86fca63b72cb21676f4e21d983e25", "text": "OBJECTIVE\nThe purpose of this study was to investigate the therapeutic effects of combined bumetanide and applied behavior analysis (ABA) treatment in children with autism.\n\n\nMETHODS\nSixty children diagnosed with autism according to the International Classification of Diseases, Tenth Revision (ICD-10) criteria (mean age of 4.5 years) were randomly divided into two groups: A single treatment group (n=28) and a combined treatment group (n=32). The combined treatment group received ABA training combined with oral bumetanide (0.5 mg twice a day). The single treatment group received ABA training only. Autism symptoms were evaluated with the Autism Behavior Checklist (ABC) and the Childhood Autism Rating Scale (CARS), whereas severity of disease (SI) and global improvement (GI) were measured with the Clinical Global Impressions (CGI). Assessment of ABC, CARS, and CGI was performed immediately before and 3 months after initiation of the treatment(s).\n\n\nRESULTS\nPrior to intervention(s) no statistically significant differences in scores on the ABC, CARS, SI, or GI were found between the two groups. Total scores of the ABC, CARS, and SI were decreased in both groups after 3 months (p<0.05) compared with the scores prior to treatment. The total scores of the ABC and the CGI were significantly (p<0.05) lower in the combined treatment group than in the single treatment group. Although the total and item scores of the CARS in the combined treatment group were lower than in the single treatment group after a 3 month intervention, they did not reach statistical significance. 
No adverse effects of bumetanide were observed.\n\n\nCONCLUSIONS\nTreatment with bumetanide combined with ABA training may result in a better outcome in children with autism than ABA training alone.", "title": "" }, { "docid": "08cfbb5cc4540af3f67db65740e28bd1", "text": "The amount of contextual data collected, stored, mined, and shared is increasing exponentially. Street cameras, credit card transactions, chat and Twitter logs, e-mail, web site visits, phone logs and recordings, social networking sites, all are examples of data that persist in a manner not under individual control, leading some to declare the death of privacy. We argue here that the ability to generate convincing fake contextual data can be a basic tool in the fight to preserve privacy. One use for the technology is for an individual to make his actual data indistinguishable amongst a pile of false data.\n In this paper we consider two examples of contextual data, search engine query data and location data. We describe the current state of faking these types of data and our own efforts in this direction.", "title": "" }, { "docid": "240ccc61e20b5692e348ce17c3621c1e", "text": "We propose a method for learning expressive energy-based policies for continuous states and actions, which has been feasible only in tabular domains before. We apply our method to learning maximum entropy policies, resulting in a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed as performing approximate inference on the corresponding energy-based model.", "title": "" }, { "docid": "1413a01f3c50ff5dfcfbeababfcd267c", "text": "Early studies indicated that teachers’ enacted beliefs, particularly in terms of classroom technology practices, often did not align with their espoused beliefs. Researchers concluded this was due, at least in part, to a variety of external barriers that prevented teachers from using technology in ways that aligned more closely with their beliefs. However, many of these barriers (access, support, etc.) have since been eliminated in the majority of schools. This multiple case-study research was designed to revisit the question, “How do the pedagogical beliefs and classroom technology practices of teachers, recognized for their technology uses, align?” Twelve K-12 classroom teachers were purposefully selected based on their award-winning technology practices, supported by evidence from personal and/or classroom websites. Follow-up interviews were conducted to examine the correspondence between teachers’ classroom practices and their pedagogical beliefs.", "title": "" }, { "docid": "fa7da02d554957f92364d4b37219feba", "text": "This paper shows mechanisms for an artificial finger based on a planetary gear system (PGS). Using the PGS as a transmitter provides an under-actuated system for driving three joints of a finger with back-drivability, which is a crucial characteristic for a finger as an end-effector when it interacts with the external environment. 
This paper also shows the artificial finger employed with the originally developed mechanism called “double planetary gear system” (DPGS). The DPGS provides not only back-drivable and under-actuated flexion-extension of the three joints of a finger, which is identical to the former, but also adduction-abduction of the MP joint. Both of the above finger mechanisms are inherently safe due to being back-drivable with no electric device or sensor in the finger part. They are also rigorously solvable in kinematics and kinetics as shown in this paper.", "title": "" }, { "docid": "1eb2aaf3e7b2f98e84105405b123fa7e", "text": "Prognostics technique aims to accurately estimate the Remaining Useful Life (RUL) of a subsystem or a component using sensor data, which has many real world applications. However, many of the existing algorithms are based on linear models, which cannot capture the complex relationship between the sensor data and RUL. Although Multilayer Perceptron (MLP) has been applied to predict RUL, it cannot learn salient features automatically, because of its network structure. A novel deep Convolutional Neural Network (CNN) based regression approach for estimating the RUL is proposed in this paper. Although CNN has been applied on tasks such as computer vision, natural language processing, speech recognition etc., this is the first attempt to adopt CNN for RUL estimation in prognostics. Different from the existing CNN structure for computer vision, the convolution and pooling filters in our approach are applied along the temporal dimension over the multi-channel sensor data to incorporate automated feature learning from raw sensor signals in a systematic way. Through the deep architecture, the learned features are the higher-level abstract representation of low-level raw sensor signals. Furthermore, feature learning and RUL estimation are mutually enhanced by the supervised feedback. We compared with several state-of-the-art algorithms on two publicly available data sets to evaluate the effectiveness of this proposed approach. The encouraging results demonstrate that our proposed deep convolutional neural network based regression approach for RUL estimation is not only more efficient but also more accurate.", "title": "" }, { "docid": "425ee0a0dc813a3870af72ac02ea8bbc", "text": "Although the mechanism of action of botulinum toxin (BTX) has been intensively studied, many unanswered questions remain regarding the composition and clinical properties of the two formulations of BTX currently approved for cosmetic use. In the first half of this review, these questions are explored in detail, with emphasis on the most pertinent and revelatory studies in the literature. The second half delineates most of the common and some not so common uses of BTX in the face and neck, stressing important patient selection and safety considerations. Complications from neurotoxins at cosmetic doses are generally rare and usually technique dependent.", "title": "" }, { "docid": "5a99af400ea048d34ee961ad7f3e3bf6", "text": "Breast cancer is becoming pervasive with each passing day. Hence, its early detection is a big step in saving life of any patient. Mammography is a common tool in breast cancer diagnosis. The most important step here is classification of mammogram patches as normal-abnormal and benign-malignant. Texture of a breast in a mammogram patch plays a big role in these classifications. 
We propose a new feature extraction descriptor called Histogram of Oriented Texture (HOT), which is a combination of Histogram of Gradients (HOG) and a Gabor filter, and exploits this fact. We also revisit the Pass Band Discrete Cosine Transform (PB-DCT) descriptor that captures texture information well. All features of a mammogram patch may not be useful. Hence, we apply a feature selection technique called Discrimination Potentiality (DP). Our resulting descriptors, DP-HOT and DP-PB-DCT, are compared with the standard descriptors. Density of a mammogram patch is important for classification, and has not been studied exhaustively. The Image Retrieval in Medical Application (IRMA) database from RWTH Aachen, Germany is a standard database that provides mammogram patches, and most researchers have tested their frameworks only on a subset of patches from this database. We apply our two new descriptors on all images of the IRMA database for density wise classification, and compare with the standard descriptors. We achieve higher accuracy than all of the existing standard descriptors (more than 92% ).", "title": "" } ]
scidocsrr
ad57643ecac12a7516a07a7210750e0f
Person Re-identification by Attributes
[ { "docid": "dbe5661d99798b24856c61b93ddb2392", "text": "Traditionally, appearance models for recognition, reacquisition and tracking problems have been evaluated independently using metrics applied to a complete system. It is shown that appearance models for these three problems can be evaluated using a cumulative matching curve on a standardized dataset, and that this one curve can be converted to a synthetic reacquisition or disambiguation rate for tracking. A challenging new dataset for viewpoint invariant pedestrian recognition (VIPeR) is provided as an example. This dataset contains 632 pedestrian image pairs from arbitrary viewpoints. Several baseline methods are tested on this dataset and the results are presented as a benchmark for future appearance models and matchin methods.", "title": "" }, { "docid": "e5d523d8a1f584421dab2eeb269cd303", "text": "In this paper, we propose a novel appearance-based method for person re-identification, that condenses a set of frames of the same individual into a highly informative signature, called Histogram Plus Epitome, HPE. It incorporates complementary global and local statistical descriptions of the human appearance, focusing on the overall chromatic content, via histograms representation, and on the presence of recurrent local patches, via epitome estimation. The matching of HPEs provides optimal performances against low resolution, occlusions, pose and illumination variations, defining novel state-of-the-art results on all the datasets considered.", "title": "" }, { "docid": "c80222e5a7dfe420d16e10b45f8fab66", "text": "Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. However, most visual features and their combinations under realistic conditions are neither stable nor distinctive thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that can maximises matching accuracy regardless the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising intra-class variation, it aims to maximise the probability of a pair of true match having a smaller distance than that of a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement on matching accuracy can be obtained against conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank.", "title": "" }, { "docid": "fbc47f2d625755bda6d9aa37805b69f1", "text": "In many surveillance applications it is desirable to determine if a given individual has been previously observed over a network of cameras. 
This is the person reidentification problem. This paper focuses on reidentification algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Person reidentification approaches have two aspects: (i) establish correspondence between parts, and (ii) generate signatures that are invariant to variations in illumination, pose, and the dynamic appearance of clothing. A novel spatiotemporal segmentation algorithm is employed to generate salient edgels that are robust to changes in appearance of clothing. The invariant signatures are generated by combining normalized color and salient edgel histograms. Two approaches are proposed to generate correspondences: (i) a model based approach that fits an articulated model to each individual to establish a correspondence map, and (ii) an interest point operator approach that nominates a large number of potential correspondences which are evaluated using a region growing scheme. Finally, the approaches are evaluated on a 44 person database across 3 disparate views.", "title": "" } ]
[ { "docid": "c4f0e371ea3950e601f76f8d34b736e3", "text": "Discretization is an essential preprocessing technique used in many knowledge discovery and data mining tasks. Its main goal is to transform a set of continuous attributes into discrete ones, by associating categorical values to intervals and thus transforming quantitative data into qualitative data. In this manner, symbolic data mining algorithms can be applied over continuous data and the representation of information is simplified, making it more concise and specific. The literature provides numerous proposals of discretization and some attempts to categorize them into a taxonomy can be found. However, in previous papers, there is a lack of consensus in the definition of the properties and no formal categorization has been established yet, which may be confusing for practitioners. Furthermore, only a small set of discretizers have been widely considered, while many other methods have gone unnoticed. With the intention of alleviating these problems, this paper provides a survey of discretization methods proposed in the literature from a theoretical and empirical perspective. From the theoretical perspective, we develop a taxonomy based on the main properties pointed out in previous research, unifying the notation and including all the known methods up to date. Empirically, we conduct an experimental study in supervised classification involving the most representative and newest discretizers, different types of classifiers, and a large number of data sets. The results of their performances measured in terms of accuracy, number of intervals, and inconsistency have been verified by means of nonparametric statistical tests. Additionally, a set of discretizers are highlighted as the best performing ones.", "title": "" }, { "docid": "f35dc45e28f2483d5ac66271590b365d", "text": "We present a vector space–based model for selectional preferences that predicts plausibility scores for argument headwords. It does not require any lexical resources (such as WordNet). It can be trained either on one corpus with syntactic annotation, or on a combination of a small semantically annotated primary corpus and a large, syntactically analyzed generalization corpus. Our model is able to predict inverse selectional preferences, that is, plausibility scores for predicates given argument heads. We evaluate our model on one NLP task (pseudo-disambiguation) and one cognitive task (prediction of human plausibility judgments), gauging the influence of different parameters and comparing our model against other model classes. We obtain consistent benefits from using the disambiguation and semantic role information provided by a semantically tagged primary corpus. As for parameters, we identify settings that yield good performance across a range of experimental conditions. However, frequency remains a major influence of prediction quality, and we also identify more robust parameter settings suitable for applications with many infrequent items.", "title": "" }, { "docid": "ad88d2e2213624270328be0aa019b5cd", "text": "The traditional decision-making framework for newsvendor models is to assume a distribution of the underlying demand. However, the resulting optimal policy is typically sensitive to the choice of the distribution. A more conservative approach is to assume that the distribution belongs to a set parameterized by a few known moments. An ambiguity-averse newsvendor would choose to maximize the worst-case profit. 
Most models of this type assume that only the mean and the variance are known, but do not attempt to include asymmetry properties of the distribution. Other recent models address asymmetry by including skewness and kurtosis. However, closed-form expressions on the optimal bounds are difficult to find for such models. In this paper, we propose a framework under which the expectation of a piecewise linear objective function is optimized over a set of distributions with known asymmetry properties. This asymmetry is represented by the first two moments of multiple random variables that result from partitioning the original distribution. In the simplest case, this reduces to semivariance. The optimal bounds can be solved through a second-order cone programming (SOCP) problem. This framework can be applied to the risk-averse and risk-neutral newsvendor problems and option pricing. We provide a closed-form expression for the worst-case newsvendor profit with only mean, variance and semivariance information.", "title": "" }, { "docid": "9076428e840f37860a395b46445c22c8", "text": "Embedded First-In First-Out (FIFO) memories are increasingly used in many IC designs. We have created a new full-custom embedded ripple-through FIFO module with asynchronous read and write clocks. The implementation is based on a micropipeline architecture and is at least a factor two smaller than SRAM-based and standard-cell-based counterparts. This paper gives an overview of the most important design features of the new FIFO module and describes its test and design-for-test approach.", "title": "" }, { "docid": "4fd78d1f9737ad996a2e3b4495e911c6", "text": "The accuracy of Wrst impressions was examined by investigating judged construct (negative aVect, positive aVect, the Big Wve personality variables, intelligence), exposure time (5, 20, 45, 60, and 300 s), and slice location (beginning, middle, end). Three hundred and thirty four judges rated 30 targets. Accuracy was deWned as the correlation between a judge’s ratings and the target’s criterion scores on the same construct. Negative aVect, extraversion, conscientiousness, and intelligence were judged moderately well after 5-s exposures; however, positive aVect, neuroticism, openness, and agreeableness required more exposure time to achieve similar levels of accuracy. Overall, accuracy increased with exposure time, judgments based on later segments of the 5-min interactions were more accurate, and 60 s yielded the optimal ratio between accuracy and slice length. Results suggest that accuracy of Wrst impressions depends on the type of judgment made, amount of exposure, and temporal location of the slice of judged social behavior. © 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "cc96a29f9c2ad0bcbeb52b2dc4c96996", "text": "The authors model the neural mechanisms underlying spatial cognition, integrating neuronal systems and behavioral data, and address the relationships between long-term memory, short-term memory, and imagery, and between egocentric and allocentric and visual and ideothetic representations. Long-term spatial memory is modeled as attractor dynamics within medial-temporal allocentric representations, and short-term memory is modeled as egocentric parietal representations driven by perception, retrieval, and imagery and modulated by directed attention. 
Both encoding and retrieval/imagery require translation between egocentric and allocentric representations, which are mediated by posterior parietal and retrosplenial areas and the use of head direction representations in Papez's circuit. Thus, the hippocampus effectively indexes information by real or imagined location, whereas Papez's circuit translates to imagery or from perception according to the direction of view. Modulation of this translation by motor efference allows spatial updating of representations, whereas prefrontal simulated motor efference allows mental exploration. The alternating temporal-parietal flows of information are organized by the theta rhythm. Simulations demonstrate the retrieval and updating of familiar spatial scenes, hemispatial neglect in memory, and the effects on hippocampal place cell firing of lesioned head direction representations and of conflicting visual and ideothetic inputs.", "title": "" }, { "docid": "8531a633a78f65161c3793c89a3eb093", "text": "Mindfulness-based approaches are increasingly employed as interventions for treating a variety of psychological, psychiatric and physical problems. Such approaches include ancient Buddhist mindfulness meditations such as Vipassana and Zen meditations, modern group-based standardized meditations, such as mindfulness-based stress reduction and mindfulness-based cognitive therapy, and further psychological interventions, such as dialectical behavioral therapy and acceptance and commitment therapy. We review commonalities and differences of these interventions regarding philosophical background, main techniques, aims, outcomes, neurobiology and psychological mechanisms. In sum, the currently applied mindfulness-based interventions show large differences in the way mindfulness is conceptualized and practiced. The decision to consider such practices as unitary or as distinct phenomena will probably influence the direction of future research.", "title": "" }, { "docid": "ec788f48207b0a001810e1eabf6b2312", "text": "Maximum likelihood factor analysis provides an effective method for estimation of factor matrices and a useful test statistic in the likelihood ratio for rejection of overly simple factor models. A reliability coefficient is proposed to indicate quality of representation of interrelations among attributes in a battery by a maximum likelihood factor analysis. Usually, for a large sample of individuals or objects, the likelihood ratio statistic could indicate that an otherwise acceptable factor model does not exactly represent the interrelations among the attributes for a population. The reliability coefficient could indicate a very close representation in this case and be a better indication as to whether to accept or reject the factor solution.", "title": "" }, { "docid": "4c588e5f05c3e4c2f3b974306095af02", "text": "Software Development Life Cycle (SDLC) is a model which provides us the basic information about the methods/techniques to develop software. It is concerned with the software management processes that examine the area of software development through the development models, which are known as software development life cycle. There are many development models namely Waterfall model, Iterative model, V-shaped model, Spiral model, Extreme programming, Iterative and Incremental Method, Rapid prototyping model and Big Bang Model. 
This is paper is concerned with the study of these different software development models and to compare their advantages and disadvantages.", "title": "" }, { "docid": "2f7d40aa2b6f2986ab4a48eec757036a", "text": "Patients ask for procedures with long-lasting effects. ArteFill is the first permanent injectable approved in 2006 by the FDA for nasolabial folds. It consists of cleaned microspheres of polymethylmethacrylate (PMMA) suspended in bovine collagen. Over the development period of 20 years most of its side effects have been eliminated to achieve the same safety standard as today’s hyaluronic acid products. A 5-year follow-up study in U.S. clinical trial patients has shown the same wrinkle improvement as seen at 6 months. Long-term follow-up in European Artecoll patients has shown successful wrinkle correction lasting up to 15 years. A wide variety of off-label indications and applications have been developed that help the physician meet the individual needs of his/her patients. Serious complications after ArteFill injections, such as granuloma formation, have not been reported due to the reduction of PMMA microspheres smaller than 20 μm to less than 1% “by the number.” Minor technique-related side effects, however, may occur during the initial learning curve. Patient and physician satisfaction with ArteFill has been shown to be greater than 90%.", "title": "" }, { "docid": "8db37f6f495a68da176e1ed411ce37a7", "text": "We present Bolt, a data management system for an emerging class of applications—those that manipulate data from connected devices in the home. It abstracts this data as a stream of time-tag-value records, with arbitrary, application-defined tags. For reliable sharing among applications, some of which may be running outside the home, Bolt uses untrusted cloud storage as seamless extension of local storage. It organizes data into chunks that contains multiple records and are individually compressed and encrypted. While chunking enables efficient transfer and storage, it also implies that data is retrieved at the granularity of chunks, instead of records. We show that the resulting overhead, however, is small because applications in this domain frequently query for multiple proximate records. We develop three diverse applications on top of Bolt and find that the performance needs of each are easily met. We also find that compared to OpenTSDB, a popular time-series database system, Bolt is up to 40 times faster than OpenTSDB while requiring 3–5 times less storage space.", "title": "" }, { "docid": "11538da6cfda3a81a7ddec0891aae1d9", "text": "This work presents a dataset and annotation scheme for the new task of identifying “good” conversations that occur online, which we call ERICs: Engaging, Respectful, and/or Informative Conversations. We develop a taxonomy to reflect features of entire threads and individual comments which we believe contribute to identifying ERICs; code a novel dataset of Yahoo News comment threads (2.4k threads and 10k comments) and 1k threads from the Internet Argument Corpus; and analyze the features characteristic of ERICs. This is one of the largest annotated corpora of online human dialogues, with the most detailed set of annotations. 
It will be valuable for identifying ERICs and other aspects of argumentation, dialogue, and discourse.", "title": "" }, { "docid": "a9975365f0bad734b77b67f63bdf7356", "text": "Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.", "title": "" }, { "docid": "dd40063dd10027f827a65976261c8683", "text": "Many software process methods and tools presuppose the existence of a formal model of a process. Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study.", "title": "" }, { "docid": "d51ef75ccf464cc03656210ec500db44", "text": "The choice of a business process modelling (BPM) tool in combination with the selection of a modelling language is one of the crucial steps in BPM project preparation. Different aspects influence the decision: tool functionality, price, modelling language support, etc. In this paper we discuss the aspect of usability, which has already been recognized as an important topic in software engineering and web design. We conduct a literature review to find out the current state of research on the usability in the BPM field. The results of the literature review show, that although a number of research papers mention the importance of usability for BPM tools, real usability evaluation studies have rarely been undertaken. Based on the results of the literature analysis, the possible research directions in the field of usability of BPM tools are suggested.", "title": "" }, { "docid": "5e8e39cb778e86b24d6ceee6419dd333", "text": "The nature of healthcare processes in a multidisciplinary hospital is inherently complex. In this paper, we identify particular problems of modeling healthcare processes with the de-facto standard process modeling language BPMN. We discuss all possibilities of BPMN adressing these problems. 
Where plain BPMN fails to produce nice and easily comprehensible results, we propose a new approach: Encorporating role information in process models using the color attribute of tasks complementary to the usage of lanes.", "title": "" }, { "docid": "de22f2f15cc427b50d4018e8c44df7e4", "text": "In this paper we examine challenges identified with participatory design research in the developing world and develop the postcolonial notion of cultural hybridity as a sensitizing concept. While participatory design intentionally addresses power relationships, its methodology does not to the same degree cover cultural power relationships, which extend beyond structural power and voice. The notion of cultural hybridity challenges the static cultural binary opposition between the self and the other, Western and non-Western, or the designer and the user---offering a more nuanced approach to understanding the malleable nature of culture. Drawing from our analysis of published literature in the participatory design community, we explore the complex relationship of participatory design to international development projects and introduce postcolonial cultural hybridity via postcolonial theory and its application within technology design thus far. Then, we examine how participatory approaches and cultural hybridity may interact in practice and conclude with a set of sensitizing insights and topics for further discussion in the participatory design community.", "title": "" }, { "docid": "2e6081fc296fbe22c97d1997a77093f6", "text": "Despite the security community's best effort, the number of serious vulnerabilities discovered in software is increasing rapidly. In theory, security audits should find and remove the vulnerabilities before the code ever gets deployed. However, due to the enormous amount of code being produced, as well as a the lack of manpower and expertise, not all code is sufficiently audited. Thus, many vulnerabilities slip into production systems. A best-practice approach is to use a code metric analysis tool, such as Flawfinder, to flag potentially dangerous code so that it can receive special attention. However, because these tools have a very high false-positive rate, the manual effort needed to find vulnerabilities remains overwhelming. In this paper, we present a new method of finding potentially dangerous code in code repositories with a significantly lower false-positive rate than comparable systems. We combine code-metric analysis with metadata gathered from code repositories to help code review teams prioritize their work. The paper makes three contributions. First, we conducted the first large-scale mapping of CVEs to GitHub commits in order to create a vulnerable commit database. Second, based on this database, we trained a SVM classifier to flag suspicious commits. Compared to Flawfinder, our approach reduces the amount of false alarms by over 99 % at the same level of recall. Finally, we present a thorough quantitative and qualitative analysis of our approach and discuss lessons learned from the results. We will share the database as a benchmark for future research and will also provide our analysis tool as a web service.", "title": "" }, { "docid": "03422659c355a0e9385957768ee1629e", "text": "Recent research has resulted in the creation of many fact extraction systems. To be able to utilize the extracted facts to their full potential, it is essential to understand their semantics. 
Placing these extracted facts in an ontology is an effective way to provide structure, which facilitates better understanding of semantics. Today there are many systems that extract facts and organize them in an ontology, namely DBpedia, NELL, YAGO etc. While such ontologies are used in a variety of applications, including IBM’s Jeopardy-winning Watson system, they demand significant effort in their creation. They are either manually curated, or built using semi-supervised machine learning techniques. As the effort in the creation of an ontology is significant, it is often hard to organize facts extracted from a corpus of documents that is different from the one used to build these ontologies in the first place. The goal of this work is to be able to automatically construct ontologies, for a given set of entities, properties and relations. One key source of this data is the Wikipedia tables dataset. Wikipedia tables are unique in that they are a large (1.4 million) and heterogeneous set of tables that can be extracted at very high levels of precision. Rather than augmenting an existing ontology, which is a very challenging research problem in itself, I propose to automatically construct a new ontology by utilizing representations of entities, their attributes and relations. These representations will be learnt using unsupervised machine learning techniques on facts extracted from Wikipedia tables. Thus, the end system will not only extract facts from Wikipedia tables, but also automatically organize them in an Ontology to understand the semantics of Wikipedia tables better.", "title": "" }, { "docid": "57ff69385a6b8202b02bf4c03d7dd78b", "text": "In this paper, we present a survey on the application of recurrent neural networks to the task of statistical language modeling. Although it has been shown that these models obtain good performance on this task, often superior to other state-of-the-art techniques, they suffer from some important drawbacks, including a very long training time and limitations on the number of context words that can be taken into account in practice. Recent extensions to recurrent neural network models have been developed in an attempt to address these drawbacks. This paper gives an overview of the most important extensions. Each technique is described and its performance on statistical language modeling, as described in the existing literature, is discussed. Our structured overview makes it possible to detect the most promising techniques in the field of recurrent neural networks, applied to language modeling, but it also highlights the techniques for which further research is required.", "title": "" }
scidocsrr
38aecd01ffac1c77b9ea75d1da11e45f
E-WOM from e-commerce websites and social media: Which will consumers adopt?
[ { "docid": "a35a564a2f0e16a21e0ef5e26601eab9", "text": "The social media revolution has created a dynamic shift in the digital marketing landscape. The voice of influence is moving from traditional marketers towards consumers through online social interactions. In this study, we focus on two types of online social interactions, namely, electronic word of mouth (eWOM) and observational learning (OL), and explore how they influence consumer purchase decisions. We also examine how receiver characteristics, consumer expertise and consumer involvement, moderate consumer purchase decision process. Analyzing panel data collected from a popular online beauty forum, we found that consumer purchase decisions are influenced by their online social interactions with others and that action-based OL information is more influential than opinion-based eWOM. Further, our results show that both consumer expertise and consumer involvement play an important moderating role, albeit in opposite direction: Whereas consumer expertise exerts a negative moderating effect, consumer involvement is found to have a positive moderating effect. The study makes important contributions to research and practice.", "title": "" }, { "docid": "beb365aacc5f66eea05d8aaebf97f275", "text": "In this paper, we study the effects of three different kinds of search engine rankings on consumer behavior and search engine revenues: direct ranking effect, interaction effect between ranking and product ratings, and personalized ranking effect. We combine a hierarchical Bayesian model estimated on approximately one million online sessions from Travelocity, together with randomized experiments using a real-world hotel search engine application. Our archival data analysis and randomized experiments are consistent in demonstrating the following: (1) a consumer utility-based ranking mechanism can lead to a significant increase in overall search engine revenue. (2) Significant interplay occurs between search engine ranking and product ratings. An inferior position on the search engine affects “higher-class” hotels more adversely. On the other hand, hotels with a lower customer rating are more likely to benefit from being placed on the top of the screen. These findings illustrate that product search engines could benefit from directly incorporating signals from social media into their ranking algorithms. (3) Our randomized experiments also reveal that an “active” (wherein users can interact with and customize the ranking algorithm) personalized ranking system leads to higher clicks but lower purchase propensities and lower search engine revenue compared to a “passive” (wherein users cannot interact with the ranking algorithm) personalized ranking system. This result suggests that providing more information during the decision-making process may lead to fewer consumer purchases because of information overload. Therefore, product search engines should not adopt personalized ranking systems by default. Overall, our study unravels the economic impact of ranking and its interaction with social media on product search engines.", "title": "" }, { "docid": "1993b540ff91922d381128e9c8592163", "text": "The use of the WWW as a venue for voicing opinions, complaints and recommendations on products and firms has been widely reported in the popular media. However little is known how consumers use these reviews and if they subsequently have any influence on evaluations and purchase intentions of products and retailers. 
This study examines the effect of negative reviews on retailer evaluation and patronage intention given that the consumer has already made a product/brand decision. Our results indicate that the extent of WOM search depends on the consumer’s reasons for choosing an online retailer. Further the influence of negative WOM information on perceived reliability and purchase intentions is determined largely by familiarity with the retailer and differs based on whether the retailer is a pure-Internet or clicks-and-mortar firm. Managerial implications for positioning strategies to minimize the effect of negative word-of-mouth have been discussed.", "title": "" }, { "docid": "26a60d17d524425cfcfa92838ef8ea06", "text": "This paper develops and tests a model of consumer trust in an electronic commerce vendor. Building consumer trust is a strategic imperative for web-based vendors because trust strongly influences consumer intentions to transact with unfamiliar vendors via the web. Trust allows consumers to overcome perceptions of risk and uncertainty, and to engage in the following three behaviors that are critical to the realization of a web-based vendor’s strategic objectives: following advice offered by the web vendor, sharing personal information with the vendor, and purchasing from the vendor’s web site. Trust in the vendor is defined as a multi-dimensional construct with two inter-related components—trusting beliefs (perceptions of the competence, benevolence, and integrity of the vendor), and trusting intentions—willingness to depend (that is, a decision to make oneself vulnerable to the vendor). Three factors are proposed for building consumer trust in the vendor: structural assurance (that is, consumer perceptions of the safety of the web environment), perceived web vendor reputation, and perceived web site quality. The model is tested in the context of a hypothetical web site offering legal advice. All three factors significantly influenced consumer trust in the web vendor. That is, these factors, especially web site quality and reputation, are powerful levers that vendors can use to build consumer trust, in order to overcome the negative perceptions people often have about the safety of the web environment. The study also demonstrates that perceived Internet risk negatively affects consumer intentions to transact with a web-based vendor.", "title": "" } ]
[ { "docid": "8f7e80051ab9e1157e94fefe2b3be372", "text": "Excellent work ([1]-[6]) has shown that memory management and transaction concurrency levels can often be tuned automatically by the database management systems. Other excellent work ([7]]-[14]) has shown how to use the optimizer to do automatic physical design or to make the optimizer itself more self-adaptive ([15]-[17]). Our performance tuning experience across various industries (finance, gaming, data warehouses, and travel) has shown that enormous additional tuning benefits (sometimes amounting to orders of magnitude) can come from reengineering application code and table design. The question is: can a tool help in this effort? We believe so. We present a tool called AppSleuth that parses application code and the tracing log for two popular database management systems in order to lead a competent tuner to the hot spots in an application. This paper discusses (i) representative application \"delinquent design patterns\", (ii) an application code parser to find them, (iii) a log parser to identify the patterns that are critical, and (iv) a display to give a global view of the issue. We present an extended sanitized case study from a real travel application to show the results of the tool at different stages of a tuning engagement, yielding a 300 fold improvement. This is the first tool of its kind that we know of.", "title": "" }, { "docid": "b37466186264c058c3b6941232ba02dc", "text": "The present study examined generational differences in the patterns and predictors of formal and informal mental health service utilization among a nationally representative sample of 1850 Asian Americans from the National Latino and Asian American Study. We focused on the effects of perceived need and relational factors on service utilization among 1st-, 1.5-, and 2nd-generation Asian Americans. Results of hierarchical logistic regression showed significant intergenerational differences. Specifically, 1.5-generation Asian Americans exhibited distinctive pattern of service use, with perceived need being associated with a higher likelihood of using formal mental health services, but only for those with high level of social support. First- and second-generation Asian Americans, on the other hand, perceived need was independently associated with formal service use, and a significant predictor of informal service use for first generation. Greater family conflict was also associated with greater use of formal and informal services for both first- and second generations. However, family cohesion was associated with only informal service use among first -generation Asian Americans. Implications for mental health service policy were discussed.", "title": "" }, { "docid": "3a81f0fc24dd90f6c35c47e60db3daa4", "text": "Advances in information and Web technologies have open numerous opportunities for online retailing. The pervasiveness of the Internet coupled with the keenness in competition among online retailers has led to virtual experiential marketing (VEM). This study examines the relationship of five VEM elements on customer browse and purchase intentions and loyalty, and the moderating effects of shopping orientation and Internet experience on these relationships. A survey was conducted of customers who frequently visited two online game stores to play two popular games in Taiwan. The results suggest that of the five VEM elements, three have positive effects on browse intention, and two on purchase intentions. 
Both browse and purchase intentions have positive effects on customer loyalty. Economic orientation was found to moderate the relationships between the VEM elements and browse and purchase intentions. However, convenience orientation moderated only the relationships between the VEM elements and browse intention.", "title": "" }, { "docid": "707f3e6c008e0ac5e92eaef13e3a5302", "text": "Data is the most valuable asset companies are proud of. When its quality degrades, the consequences are unpredictable, can lead to complete wrong insights. In Big Data context, evaluating the data quality is challenging, must be done prior to any Big data analytics by providing some data quality confidence. Given the huge data size, its fast generation, it requires mechanisms, strategies to evaluate, assess data quality in a fast, efficient way. However, checking the Quality of Big Data is a very costly process if it is applied on the entire data. In this paper, we propose an efficient data quality evaluation scheme by applying sampling strategies on Big data sets. The Sampling will reduce the data size to a representative population samples for fast quality evaluation. The evaluation targeted some data quality dimensions like completeness, consistency. The experimentations have been conducted on Sleep disorder's data set by applying Big data bootstrap sampling techniques. The results showed that the mean quality score of samples is representative for the original data, illustrate the importance of sampling to reduce computing costs when Big data quality evaluation is concerned. We applied the Quality results generated as quality proposals on the original data to increase its quality.", "title": "" }, { "docid": "b66fcc8a9239dadb4fcc8f6ee676529d", "text": "The study investigates into the fast fashion industry worldwide, specifically on Zara, H&M and UNIQLO with respect to efficient supply chain management, scarce value creation, low costs promotions and positioning strategy, supported by comparisons between several typical well-known fast fashion brands. Through the overall analysis of B2C apparel online retailing in China, statistics show an enormous space for online retailing fast fashion industry to explore but a far way to catch up with the leading enterprises in the world in terms of e-commerce scale. The next main part demonstrates a case of a Chinese fast fashion online retailer-Vancl, analyzing its keys to success in aspects of proper product positioning, brand positioning, business mode, marketing strategy, products and services, user experience, logistics and team management. In addition, relevant suggestions for further prosperity are proposed in the end of the paper.", "title": "" }, { "docid": "97a6a77cfa356636e11e02ffe6fc0121", "text": "In this paper, we present a new intrinsically motivated actor-critic algorithm for learning continuous motor skills directly from raw visual input. Our neural architecture is composed of a critic and an actor network. 
Both networks receive the hidden representation of a deep convolutional autoencoder which is trained to reconstruct the visual input, while the centre-most hidden representation is also optimized to estimate the state value. Separately, an ensemble of predictive world models generates, based on its learning progress, an intrinsic reward signal which is combined with the extrinsic reward to guide the exploration of the actor-critic learner. Our approach is more data-efficient and inherently more stable than the existing actor-critic methods for continuous control from pixel data. We evaluate our algorithm for the task of learning robotic reaching and grasping skills on a realistic physics simulator and on a humanoid robot. The results show that the control policies learned with our approach can achieve better performance than the compared state-of-the-art and baseline algorithms in both dense-reward and challenging sparse-reward settings.", "title": "" }, { "docid": "c3ad915ac57bf56c4adc47acee816b54", "text": "How does the brain "produce" conscious subjective experience, an awareness of something? This question has been regarded as perhaps the most challenging one facing science. Penfield et al. [9] had produced maps of where responses to electrical stimulation of cerebral cortex could be obtained in human neurosurgical patients. Mapping of cerebral activations in various subjective paradigms has been greatly extended more recently by utilizing PET scan and fMRI techniques. But there were virtually no studies of what the appropriate neurons do in order to elicit a conscious experience. The opportunity for me to attempt such studies arose when my friend and neurosurgeon colleague, Bertram Feinstein, invited me to utilize the opportunity presented by access to stimulating and recording electrodes placed for therapeutic purposes intracranially in awake and responsive patients. With the availability of an excellent facility and team of co-workers, I decided to study neuronal activity requirements for eliciting a simple conscious somatosensory experience, and compare that to activity requirements for unconscious detection of sensory signals. We discovered that a surprising duration of appropriate neuronal activations, up to about 500 msec, was required in order to elicit a conscious sensory experience [5]. This was true not only when the initiating stimulus was in any of the cerebral somatosensory pathways; several lines of evidence indicated that even a single stimulus pulse to the skin required similar durations of activities at the cortical level. That discovery led to further studies of such a delay factor for awareness generally, and to profound inferences for the nature of conscious subjective experience. It formed the basis of that highlight in my work [1,3]. For example, a neuronal requirement of about 500 msec to produce awareness meant that we do not experience our sensory world immediately, in real time. But that would contradict our intuitive feeling of the experience in real time. We solved this paradox with a hypothesis for "backward referral" of subjective experience to the time of the first cortical response, the primary evoked potential. This was tested and confirmed experimentally [8], a thrilling result. We could now add subjective referral in time to the already known subjective referral in space. Subjective referrals have no known neural basis and appear to be purely mental phenomena! 
Another experimental study supported my "time-on" theory for eliciting conscious sensations as opposed to unconscious detection [7]. The time-factor appeared also in an endogenous experience, the conscious intention or will to produce a purely voluntary act [4,6]. In this, we found that cerebral activity initiates this volitional process at least 350 msec before the conscious wish (W) to act appears. However, W appears about 200 msec before the muscles are activated. That retained the possibility that the conscious will could control the outcome of the volitional process; it could veto it and block the performance of the act. These discoveries have profound implications for the nature of free will, for individual responsibility and guilt. Discovery of these time factors led to unexpected ways of viewing conscious experience and unconscious mental functions. Experience of the sensory world is delayed. It raised the possibility that all conscious mental functions are initiated unconsciously and become conscious only if neuronal activities persist for a sufficiently long time. Conscious experiences must be discontinuous if there is a delay for each; the "stream of consciousness" must be modified. Quick actions or responses, whether in reaction times, sports activities, etc., would all be initially unconscious. Unconscious mental operations, as in creative thinking, artistic impulses, production of speech, performing in music, etc., can all proceed rapidly, since only brief neural actions are sufficient. Rapid unconscious events would allow faster processing in thinking, etc. The delay for awareness provides a physiological opportunity for modulatory influences to affect the content of an experience that finally appears, as in Freudian repression of certain sensory images or thoughts [2,3]. The discovery of the neural time factor (except in conscious will) could not have been made without intracranial access to the neural pathways. They provided an experimentally based entry into how new hypotheses, of how the brain deals with conscious experience, could be directly tested. That was in contrast to the many philosophical approaches which were speculative and mostly untestable. Evidence based views could now be accepted with some confidence.", "title": "" }, { "docid": "2f471c24ccb38e70627eba6383c003e0", "text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. 
We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.", "title": "" }, { "docid": "646b594b713a92a5a0ab6b97ee91d927", "text": "We aim to constrain the evolution of active galactic nuclei (AGNs) as a function of obscuration using an X-ray-selected sample of ∼2000 AGNs from a multi-tiered survey including the CDFS, AEGIS-XD, COSMOS, and XMM-XXL fields. The spectra of individual X-ray sources are analyzed using a Bayesian methodology with a physically realistic model to infer the posterior distribution of the hydrogen column density and intrinsic X-ray luminosity. We develop a novel non-parametric method that allows us to robustly infer the distribution of the AGN population in X-ray luminosity, redshift, and obscuring column density, relying only on minimal smoothness assumptions. Our analysis properly incorporates uncertainties from low count spectra, photometric redshift measurements, association incompleteness, and the limited sample size. We find that obscured AGNs with NH > 10^22 cm^-2 account for 77 −5% of the number density and luminosity density of the accretion supermassive black hole population with LX > 10^43 erg s^-1, averaged over cosmic time. Compton-thick AGNs account for approximately half the number and luminosity density of the obscured population, and 38 −7% of the total. We also find evidence that the evolution is obscuration dependent, with the strongest evolution around NH ≈ 10^23 cm^-2. We highlight this by measuring the obscured fraction in Compton-thin AGNs, which increases toward z ∼ 3, where it is 25% higher than the local value. In contrast, the fraction of Compton-thick AGNs is consistent with being constant at ≈35%, independent of redshift and accretion luminosity. We discuss our findings in the context of existing models and conclude that the observed evolution is, to first order, a side effect of anti-hierarchical growth.", "title": "" }, { "docid": "3a9bba31f77f4026490d7a0faf4aeaa4", "text": "We explore several different document representation models and two query expansion models for the task of recommending blogs to a user in response to a query. Blog relevance ranking differs from traditional document ranking in ad-hoc information retrieval in several ways: (1) the unit of output (the blog) is composed of a collection of documents (the blog posts) rather than a single document, (2) the query represents an ongoing – and typically multifaceted – interest in the topic rather than a passing ad-hoc information need and (3) due to the propensity of spam, splogs, and tangential comments, the blogosphere is particularly challenging to use as a source for high-quality query expansion terms. We address these differences at the document representation level, by comparing retrieval models that view either the blog or its constituent posts as the atomic units of retrieval, and at the query expansion level, by making novel use of the links and anchor text in Wikipedia to expand a user’s initial query. We develop two complementary models of blog retrieval that perform at comparable levels of precision and recall. 
We also show consistent and significant improvement across all models using our Wikipedia expansion strategy.", "title": "" }, { "docid": "d099cf0b4a74ddb018775b524ec92788", "text": "This report proposes 15 large-scale benchmark problems as an extension to the existing CEC’2010 large-scale global optimization benchmark suite. The aim is to better represent a wider range of realworld large-scale optimization problems and provide convenience and flexibility for comparing various evolutionary algorithms specifically designed for large-scale global optimization. Introducing imbalance between the contribution of various subcomponents, subcomponents with nonuniform sizes, and conforming and conflicting overlapping functions are among the major new features proposed in this report.", "title": "" }, { "docid": "e3e59a258e8867baaf7717063868c150", "text": "This paper proposes a novel methodology FPGA Trust Zone (FTZ) to incorporate security into the design cycle to detect and isolate anomalies such as Hardware Trojans in the FPGA fabric. Anomalies are identified using violation to spatial correlation of process variation in FPGA fabric. Anomalies are isolated using Xilinx Isolation Design Flow (IDF) methodology. FTZ helps identify and partition the FPGA into areas that are devoid of anomalies and thus, assists to run designs securely and reliably even in an anomaly-infected FPGA. FTZ also assists IDF to select trustworthy areas for implementing isolated designs and trusted routes. We demonstrate the effectiveness of FTZ for AES and RC5 designs on Xilinx Virtex-7 and Atrix-7 FPGAs.", "title": "" }, { "docid": "45c9ecc06dca6e18aae89ebf509d31d2", "text": "For estimating causal effects of treatments, randomized experiments are generally considered the gold standard. Nevertheless, they are often infeasible to conduct for a variety of reasons, such as ethical concerns, excessive expense, or timeliness. Consequently, much of our knowledge of causal effects must come from non-randomized observational studies. This article will advocate the position that observational studies can and should be designed to approximate randomized experiments as closely as possible. In particular, observational studies should be designed using only background information to create subgroups of similar treated and control units, where 'similar' here refers to their distributions of background variables. Of great importance, this activity should be conducted without any access to any outcome data, thereby assuring the objectivity of the design. In many situations, this objective creation of subgroups of similar treated and control units, which are balanced with respect to covariates, can be accomplished using propensity score methods. The theoretical perspective underlying this position will be presented followed by a particular application in the context of the US tobacco litigation. This application uses propensity score methods to create subgroups of treated units (male current smokers) and control units (male never smokers) who are at least as similar with respect to their distributions of observed background characteristics as if they had been randomized. The collection of these subgroups then 'approximate' a randomized block experiment with respect to the observed covariates.", "title": "" }, { "docid": "7d0a7073733f8393478be44d820e89ae", "text": "Modeling user-item interaction patterns is an important task for personalized recommendations. 
Many recommender systems are based on the assumption that there exists a linear relationship between users and items while neglecting the intricacy and non-linearity of real-life historical interactions. In this paper, we propose a neural network based recommendation model (NeuRec) that untangles the complexity of user-item interactions and establish an integrated network to combine non-linear transformation with latent factors. We further design two variants of NeuRec: userbased NeuRec and item-based NeuRec, by focusing on different aspects of the interaction matrix. Extensive experiments on four real-world datasets demonstrated their superior performances on personalized ranking task.", "title": "" }, { "docid": "a9d22e2568bcae7a98af7811546c7853", "text": "This thesis addresses the challenges of building a software system for general-purpose runtime code manipulation. Modern applications, with dynamically-loaded modules and dynamicallygenerated code, are assembled at runtime. While it was once feasible at compile time to observe and manipulate every instruction — which is critical for program analysis, instrumentation, trace gathering, optimization, and similar tools — it can now only be done at runtime. Existing runtime tools are successful at inserting instrumentation calls, but no general framework has been developed for fine-grained and comprehensive code observation and modification without high overheads. This thesis demonstrates the feasibility of building such a system in software. We present DynamoRIO, a fully-implemented runtime code manipulation system that supports code transformations on any part of a program, while it executes. DynamoRIO uses code caching technology to provide efficient, transparent, and comprehensive manipulation of an unmodified application running on a stock operating system and commodity hardware. DynamoRIO executes large, complex, modern applications with dynamically-loaded, generated, or even modified code. Despite the formidable obstacles inherent in the IA-32 architecture, DynamoRIO provides these capabilities efficiently, with zero to thirty percent time and memory overhead on both Windows and Linux. DynamoRIO exports an interface for building custom runtime code manipulation tools of all types. It has been used by many researchers, with several hundred downloads of our public release, and is being commercialized in a product for protection against remote security exploits, one of numerous applications of runtime code manipulation. Thesis Supervisor: Saman Amarasinghe Title: Associate Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "53a49412d75190357df5d159b11843f0", "text": "Perception and reasoning are basic human abilities that are seamlessly connected as part of human intelligence. However, in current machine learning systems, the perception and reasoning modules are incompatible. Tasks requiring joint perception and reasoning ability are difficult to accomplish autonomously and still demand human intervention. Inspired by the way language experts decoded Mayan scripts by joining two abilities in an abductive manner, this paper proposes the abductive learning framework. The framework learns perception and reasoning simultaneously with the help of a trial-and-error abductive process. We present the Neural-Logical Machine as an implementation of this novel learning framework. 
We demonstrate that, using human-like abductive learning, the machine learns from a small set of simple hand-written equations and then generalizes well to complex equations, a feat that is beyond the capability of state-of-the-art neural network models. The abductive learning framework explores a new direction for approaching human-level learning ability.", "title": "" }, { "docid": "5a8ac761e486bc58222339fd3e705a75", "text": "Traditional approaches to organizational change have been dominated by assumptions privileging stability, routine, and order. As a result, organizational change has been reified and treated as exceptional rather than natural. In this paper, we set out to offer an account of organizational change on its own terms—to treat change as the normal condition of organizational life. The central question we address is as follows: What must organization(s) be like if change is constitutive of reality? Wishing to highlight the pervasiveness of change in organizations, we talk about organizational becoming. Change, we argue, is the reweaving of actors’ webs of beliefs and habits of action to accommodate new experiences obtained through interactions. Insofar as this is an ongoing process, that is to the extent actors try to make sense of and act coherently in the world, change is inherent in human action, and organizations are sites of continuously evolving human action. In this view, organization is a secondary accomplishment, in a double sense. Firstly, organization is the attempt to order the intrinsic flux of human action, to channel it towards certain ends by generalizing and institutionalizing particular cognitive representations. Secondly, organization is a pattern that is constituted, shaped, and emerging from change. Organization aims at stemming change but, in the process of doing so, it is generated by it. These claims are illustrated by drawing on the work of several organizational ethnographers. The implications of this view for theory and practice are outlined. (Continuous Change; Routines; Process; Improvization; Reflexivity; Emergence; Interaction; Experience) The point is that usually we look at change but we do not see it. We speak of change, but we do not think about it. We say that change exists, that everything changes, that change is the very law of things: Yes, we say it and we repeat it; but those are only words, and we reason and philosophize as though change did not exist. In order to think change and see it, there is a whole veil of prejudices to brush aside, some of them artificial, created by philosophical speculation, the others natural", "title": "" }, { "docid": "11ae8b4875d90f89e5d71c1f5608d7dc", "text": "The aim of this study was to assess the clinical, radiographic and microscopic features of a case series of ossifying fibroma (OF) of the jaws. For the study, all cases with OF diagnosis from the files of the Oral Pathology Laboratory, University of Ribeirão Preto, Ribeirão Preto, SP, Brazil, were reviewed. Clinical data were obtained from the patient files and the radiographic features were evaluated in each case. All cases were reviewed microscopically to confirm the diagnosis. Eight cases were identified, 5 in females and 3 in males. The mean age of the patients was 33.7 years and most lesions (7 cases) occurred in the mandible. Radiographically, all lesions appeared as unilocular images and most of them (5 cases) were of mixed type. The mean size of the tumor was 3.1 cm and 3 cases caused displacement of the involved teeth. 
Microscopically, all cases showed several bone-like mineralized areas, immersed in the cellular connective tissue. From the 8 cases, 5 underwent surgical excision and 1 patient refused treatment. In the remaining 2 cases, this information was not available. In conclusion, OF occurs more commonly in women in the fourth decade of life, frequently as a mixed radiographic image in the mandible. Coherent differential diagnoses are important to guide the most adequate clinical approach. A correlation between clinical, imaginological and histopathological features is the key to establish the correct diagnosis.", "title": "" }, { "docid": "84dc68051848dedc102b8f2e1603bb8c", "text": "The Primal-Dual hybrid gradient (PDHG) method is a powerful optimization scheme that breaks complex problems into simple sub-steps. Unfortunately, PDHG methods require the user to choose stepsize parameters, and the speed of convergence is highly sensitive to this choice. We introduce new adaptive PDHG schemes that automatically tune the stepsize parameters for fast convergence without user inputs. We prove rigorous convergence results for our methods, and identify the conditions required for convergence. We also develop practical implementations of adaptive schemes that formally satisfy the convergence requirements. Numerical experiments show that adaptive PDHG methods have advantages over non-adaptive implementations in terms of both efficiency and simplicity for the user.", "title": "" }, { "docid": "1c83671ad725908b2d4a6467b23fc83f", "text": "Although many IT and business managers today may be lured into business intelligence (BI) investments by the promise of predictive analytics and emerging BI trends, creating an enterprise-wide BI capability is a journey that takes time. This article describes Norfolk Southern Railway’s BI journey, which began in the early 1990s with departmental reporting, evolved into data warehousing and analytic applications, and has resulted in a company that today uses BI to support corporate strategy. We describe how BI at Norfolk Southern evolved over several decades, with the company developing strong BI foundations and an effective enterprise-wide BI capability. We also identify the practices that kept the BI journey “on track.” These practices can be used by other IT and business leaders as they plan and develop BI capabilities in their own organizations.", "title": "" } ]
scidocsrr
32de30229b864535df14c557663258bf
A hybrid approach to offloading mobile image classification
[ { "docid": "12fe1e2edd640b55a769e5c881822aa6", "text": "In this paper we introduce a runtime system to allow unmodified multi-threaded applications to use multiple machines. The system allows threads to migrate freely between machines depending on the workload. Our prototype, COMET (Code Offload by Migrating Execution Transparently), is a realization of this design built on top of the Dalvik Virtual Machine. COMET leverages the underlying memory model of our runtime to implement distributed shared memory (DSM) with as few interactions between machines as possible. Making use of a new VM-synchronization primitive, COMET imposes little restriction on when migration can occur. Additionally, enough information is maintained so one machine may resume computation after a network failure. We target our efforts towards augmenting smartphones or tablets with machines available in the network. We demonstrate the effectiveness of COMET on several real applications available on Google Play. These applications include image editors, turn-based games, a trip planner, and math tools. Utilizing a server-class machine, COMET can offer significant speed-ups on these real applications when run on a modern smartphone. With WiFi and 3G networks, we observe geometric mean speed-ups of 2.88X and 1.27X relative to the Dalvik interpreter across the set of applications with speed-ups as high as 15X on some applications.", "title": "" } ]
[ { "docid": "4285d9b4b9f63f22033ce9a82eec2c76", "text": "To ease large-scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems. Moreover, since one of the main tasks for which agent systems were invented is the integration between heterogeneous software, independently developed agents should be able to interact successfully. In this paper, we present JADE (Java Agent Development Environment), a software framework to build agent systems for the management of networked information resources in compliance with the FIPA specifications for inter-operable intelligent multi-agent systems. The goal of JADE is to simplify development while ensuring standard compliance through a comprehensive set of system services and agents. JADE can then be considered to be an agent middle-ware that implements an efficient agent platform and supports the development of multi-agent systems. It deals with all the aspects that are not peculiar to agent internals and that are independent of the applications, such as message transport, encoding and parsing, or agent life-cycle management. Copyright  2001 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "b2cd02622ec0fc29b54e567c7f10a935", "text": "Performance and high availability have become increasingly important drivers, amongst other drivers, for user retention in the context of web services such as social networks, and web search. Exogenic and/or endogenic factors often give rise to anomalies, making it very challenging to maintain high availability, while also delivering high performance. Given that service-oriented architectures (SOA) typically have a large number of services, with each service having a large set of metrics, automatic detection of anomalies is nontrivial. Although there exists a large body of prior research in anomaly detection, existing techniques are not applicable in the context of social network data, owing to the inherent seasonal and trend components in the time series data. To this end, we developed two novel statistical techniques for automatically detecting anomalies in cloud infrastructure data. Specifically, the techniques employ statistical learning to detect anomalies in both application, and system metrics. Seasonal decomposition is employed to filter the trend and seasonal components of the time series, followed by the use of robust statistical metrics – median and median absolute deviation (MAD) – to accurately detect anomalies, even in the presence of seasonal spikes. We demonstrate the efficacy of the proposed techniques from three different perspectives, viz., capacity planning, user behavior, and supervised learning. In particular, we used production data for evaluation, and we report Precision, Recall, and F-measure in each case.", "title": "" }, { "docid": "3a1a8f884d85234099a853d64e87ebd3", "text": "Fault localization, a central aspect of network fault management, is a process of deducing the exact source of a failure from a set of observed failure indications. It has been a focus of research activity since the advent of modern comm unication systems, which produced numerous fault localization techniques. Howeve r, ascommunication systems evolved becoming more complex and offering new capabilities, the requirements imposed on fault localization techniques have changed as well. It is fair to say that despite this research effort, fault localization in complex communication systems remains an open research problem. 
This paper discusses the challenges of fault localization in complex communication systems and presents an overview of solutions proposed in the course of the last ten years, while discussing their advantages and shortcomings. The survey is followed by the presentation of potential directions for future research in this area.", "title": "" }, { "docid": "0076b92197a97624a02ebce754be5ee7", "text": "The ubiquitous application of eye tracking is precluded by the requirement of dedicated and expensive hardware, such as infrared high definition cameras. Therefore, systems based solely on appearance (i.e. not involving active infrared illumination) are being proposed in literature. However, although these systems are able to successfully locate eyes, their accuracy is significantly lower than commercial eye tracking devices. Our aim is to perform very accurate eye center location and tracking, using a simple Web cam. By means of a novel relevance mechanism, the proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve rotational invariance and to keep low computational costs. In this paper we test our approach for accurate eye location and robustness to changes in illumination and pose, using the BioID and the Yale Face B databases, respectively. We demonstrate that our system can achieve a considerable improvement in accuracy over state of the art techniques.", "title": "" }, { "docid": "ec7931f1a56bf7d4dd6cc1a5cb2d0625", "text": "Modern life is intimately linked to the availability of fossil fuels, which continue to meet the world's growing energy needs even though their use drives climate change, exhausts finite reserves and contributes to global political strife. Biofuels made from renewable resources could be a more sustainable alternative, particularly if sourced from organisms, such as algae, that can be farmed without using valuable arable land. Strain development and process engineering are needed to make algal biofuels practical and economically viable.", "title": "" }, { "docid": "2496fa63868717ce2ed56c1777c4b0ed", "text": "Person re-identification (reID) is an important task that requires to retrieve a person’s images from an image dataset, given one image of the person of interest. For learning robust person features, the pose variation of person images is one of the key challenges. Existing works targeting the problem either perform human alignment, or learn human-region-based representations. Extra pose information and computational cost is generally required for inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires appearance of a same person’s generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information and additional computational cost is required during testing. Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates that the effectiveness and robust feature distilling capability of the proposed FD-GAN. 
‡‡", "title": "" }, { "docid": "7dfb75bba9d6a261a7138199c834ee36", "text": "Approximately 50% of patients with Fisher's syndrome show involvement of the pupillomotor fibers and present with mydriasis and light-near dissociation. However, it is uncertain whether this phenomenon is induced by an aberrant reinnervation mechanism as in tonic pupil, or is based on other mechanisms such as those associated with tectal pupils. We evaluated the clinical course and the pupillary responses in four of 27 patients with Fisher's syndrome who presented with bilateral mydriasis. The pupils of both eyes of the four patients were involved at the early stage of Fisher's syndrome. The pupils in patients 1 and 2 showed mydriasis with apparent light-near dissociation lasting for a significant period and had denervation supersensitivity to cholinergic agents. On the other hand, the pupils of patients 3 and 4 were dilated and fixed to both light and near stimuli. Our observations indicate that the denervated iris sphincter muscles, which are supersensitive to the cholinergic transmitter, may play an important role in the expression of light-near dissociation in Fisher's syndrome. Jpn J Ophthalmol 2007;51:224–227 © Japanese Ophthalmological Society 2007", "title": "" }, { "docid": "63a583de2dbbbd9aada8a685ec9edc78", "text": "BACKGROUND\nVarious nerve blocks with local anaesthetic agents have been used to reduce pain after hip fracture and subsequent surgery. This review was published originally in 1999 and was updated in 2001, 2002, 2009 and 2017.\n\n\nOBJECTIVES\nThis review focuses on the use of peripheral nerves blocks as preoperative analgesia, as postoperative analgesia or as a supplement to general anaesthesia for hip fracture surgery. We undertook the update to look for new studies and to update the methods to reflect Cochrane standards.\n\n\nSEARCH METHODS\nFor the updated review, we searched the following databases: the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 8), MEDLINE (Ovid SP, 1966 to August week 1 2016), Embase (Ovid SP, 1988 to 2016 August week 1) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (EBSCO, 1982 to August week 1 2016), as well as trial registers and reference lists of relevant articles.\n\n\nSELECTION CRITERIA\nWe included randomized controlled trials (RCTs) involving use of nerve blocks as part of the care provided for adults aged 16 years and older with hip fracture.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed new trials for inclusion, determined trial quality using the Cochrane tool and extracted data. When appropriate, we pooled results of outcome measures. We rated the quality of evidence according to the GRADE Working Group approach.\n\n\nMAIN RESULTS\nWe included 31 trials (1760 participants; 897 randomized to peripheral nerve blocks and 863 to no regional blockade). Results of eight trials with 373 participants show that peripheral nerve blocks reduced pain on movement within 30 minutes of block placement (standardized mean difference (SMD) -1.41, 95% confidence interval (CI) -2.14 to -0.67; equivalent to -3.4 on a scale from 0 to 10; I2 = 90%; high quality of evidence). Effect size was proportionate to the concentration of local anaesthetic used (P < 0.00001). Based on seven trials with 676 participants, we did not find a difference in the risk of acute confusional state (risk ratio (RR) 0.69, 95% CI 0.38 to 1.27; I2 = 48%; very low quality of evidence). 
Three trials with 131 participants reported decreased risk for pneumonia (RR 0.41, 95% CI 0.19 to 0.89; I2 = 3%; number needed to treat for an additional beneficial outcome (NNTB) 7, 95% CI 5 to 72; moderate quality of evidence). We did not find a difference in risk of myocardial ischaemia or death within six months, but the number of participants included was well below the optimal information size for these two outcomes. Two trials with 155 participants reported that peripheral nerve blocks also reduced time to first mobilization after surgery (mean difference -11.25 hours, 95% CI -14.34 to -8.15 hours; I2 = 52%; moderate quality of evidence). One trial with 75 participants indicated that the cost of analgesic drugs was lower when they were given as a single shot block (SMD -3.48, 95% CI -4.23 to -2.74; moderate quality of evidence).\n\n\nAUTHORS' CONCLUSIONS\nHigh-quality evidence shows that regional blockade reduces pain on movement within 30 minutes after block placement. Moderate-quality evidence shows reduced risk for pneumonia, decreased time to first mobilization and cost reduction of the analgesic regimen (single shot blocks).", "title": "" }, { "docid": "f9b3813d806e93cc0a88143c89cd1379", "text": "Deep Neural Networks (DNNs) are very popular these days, and are the subject of a very intense investigation. A DNN is made up of layers of internal units (or neurons), each of which computes an affine combination of the output of the units in the previous layer, applies a nonlinear operator, and outputs the corresponding value (also known as activation). A commonly-used nonlinear operator is the so-called rectified linear unit (ReLU), whose output is just the maximum between its input value and zero. In this (and other similar cases like max pooling, where the max operation involves more than one input value), for fixed parameters one can model the DNN as a 0-1 Mixed Integer Linear Program (0-1 MILP) where the continuous variables correspond to the output values of each unit, and a binary variable is associated with each ReLU to model its yes/no nature. In this paper we discuss the peculiarity of this kind of 0-1 MILP models, and describe an effective bound-tightening technique intended to ease its solution. We also present possible applications of the 0-1 MILP model arising in feature visualization and in the construction of adversarial examples. Computational results are reported, aimed at investigating (on small DNNs) the computational performance of a state-of-the-art MILP solver when applied to a known test case, namely, hand-written digit recognition.", "title": "" }, { "docid": "258acaa6078e9f4344b967503d5c25ac", "text": "Poetry style is not only a high-level abstract semantic information but also an important factor to the success of poetry generation system. Most work on Chinese poetry generation focused on controlling the coherence of the content of the poem and ignored the poetic style of the poem. In this paper, we propose a Poet-based Poetry Generation method which generates poems by controlling not only content selection but also poetic style factor (consistent poetic style expression). The proposed method consists of two stages: Capturing poetic style embedding by modeling poems and high-level abstraction of poetic style in Poetic Style Model, and generating each line sequentially using a modified RNN encoder-decoder framework. 
Experiments with human evaluation show that our method can generate high-quality poems corresponding to the keywords and poetic style.", "title": "" }, { "docid": "f1c2af06078b6b5c802d773a72fc22ad", "text": "Virtual environments have the potential to become important new research tools in environment behavior research. They could even become the future (virtual) laboratories, if reactions of people to virtual environments are similar to those in real environments. The present study is an exploration of the comparability of research findings in real and virtual environments. In the study, 101 participants explored an identical space, either in reality or in a computer-simulated environment. Additionally, the presence of plants in the space was manipulated, resulting in a 2 (environment) 2 (plants) between-subjects design. Employing a broad set of measurements, we found mixed results. Performances on size estimations and a cognitive mapping task were significantly better in the real environment. Factor analyses of bipolar adjectives indicated that, although four dimensions were similar for both environments, a fifth dimension of environmental assessment, termed arousal, was absent in the virtual environment. In addition, we found significant differences on the scores of four of the scales. However, no significant interactions appeared between environment and plants. Experience of and behavior in virtual environments have similarities to that in real environments, but there are important differences as well. We conclude that this is not only a necessary, but also a very interesting research subject for environmental psychology.", "title": "" }, { "docid": "056545b30d07f865644024af55159a1c", "text": "Several fast algorithms are presented for computing functions defined on paths in trees under various assumptions. The algorithms are based on tree manipulation methods first used to efficiently represent equivalence relations. The algorithms have O((m + n)a(m + n, n)) running times, where m and n are measures of the problem size and a is a functional reverse of Ackermann's function. By using one or more of these algorithms in combination with other techniques, it is possible to solve the following graph problems in O(ma(m, n)) time, where m is the number of edges and n is the number of vertices in the problem graph. A. Verifying a minimum spanning tree in an undirected graph (Best previously known time bound O(m log log n).) B. Finding dominators in a flow graph (Best previously known time bound O(n log n + m).) C. Solving a path problem on a reducible flow graph. (Best previously known time bound O(m log n).)", "title": "" }, { "docid": "437bf63857bf42d2e46362475c9badb4", "text": "Regenerative braking is an effective approach for electric vehicles (EVs) to extend their driving range. A fuzzy-logic-based regenerative braking strategy (RBS) integrated with series regenerative braking is developed in this paper to advance the level of energy-savings. From the viewpoint of securing car stability in braking operations, the braking force distribution between the front and rear wheels so as to accord with the ideal distribution curve are considered to prevent vehicles from experiencing wheel lock and slip phenomena during braking. Then, a fuzzy RBS using the driver's braking force command, vehicle speed, battery SOC, battery temperature are designed to determine the distribution between friction braking force and regenerative braking force to improve the energy recuperation efficiency.
The experimental results on an “LF620” prototype EV validated the feasibility and effectiveness of regenerative braking and showed that the proposed fuzzy RBS was endowed with good control performance. The maximum driving range of LF620 EV was improved by 25.7% compared with non-RBS conditions.", "title": "" }, { "docid": "4efc6eeabd2a3f6c4f376cb3e533f9d1", "text": "Recognizing and distinguishing antonyms from other types of semantic relations is an essential part of language understanding systems. In this paper, we present a novel method for deriving antonym pairs using paraphrase pairs containing negation markers. We further propose a neural network model, AntNET, that integrates morphological features indicative of antonymy into a path-based relation detection algorithm. We demonstrate that our model outperforms state-of-the-art models in distinguishing antonyms from other semantic relations and is capable of efficiently handling multi-word expressions.", "title": "" }, { "docid": "716d55e7accbeacb94af37a6773c0ad9", "text": "A novel dual-band unidirectional antenna composed of an irregular shorted patch and planar dipole elements is presented and studied. This proposed antenna employs an L-shaped feeding strip for exciting dual-band operations. For the lower frequency band, a V-slot is loaded on the planar dipole for acting as a capacitive loading for having wide impedance matching of the antenna. Then, two additional smaller planar dipoles are placed and connected to the irregular short patch for achieving another wideband performance at the higher frequency band. The proposed antenna has wide dual-band impedance bandwidths of 34% and 49.5% in the lower and the upper bands, ranged from 0.78 GHz to 1.1 GHz and from 1.58 GHz to 2.62 GHz, respectively. More importantly, the antenna has stable gains of 7 dBi and 8 dBi across each band, demonstrating the high stability of the radiation characteristic for the two radiation modes. The radiation patterns are symmetric, low back-lobe radiation and low cross-polarization. Analysis and parametric studies of the proposed antenna are provided.", "title": "" }, { "docid": "f141bd66dc2a842c21f905e3e01fa93c", "text": "In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction proposed in this paper is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the a trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design condition of the NSFB to design filters that lead to a NSCT with better frequency selectivity and regularity when compared to the contourlet transform. We propose a design framework based on the mapping approach, that allows for a fast implementation based on a lifting or ladder structure, and only uses one-dimensional filtering in some cases. In addition, our design ensures that the corresponding frame elements are regular, symmetric, and the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications. 
In both applications the NSCT compares favorably to other existing methods in the literature", "title": "" }, { "docid": "412a5d414b2ac845a14af5e05abe5f6f", "text": "Organizations often face challenges with the adoption and use of new information systems, and social media is no exception. In this exploratory case study, we aim to identify internal and external challenges related to the adoption and use of social media in a large case company. Our findings show that internal challenges include resources, ownership, authorization, attitudes and economic issues, whereas external challenges are associated with company reputation, legal issues and public/private network identity. We add to the knowledge created by previous studies by introducing the challenges related to ownership and authorization of social media services, which were found to be of high relevance in corporate social media adoption. In order to overcome these obstacles, we propose that organizations prepare strategies and guidelines for social media adoption and use.", "title": "" }, { "docid": "e8a36f2eeae3cdd1bf2d83680aa9f82f", "text": "We conducted a study to track the emotions, their behavioral correlates, and relationship with performance when novice programmers learned the basics of computer programming in the Python language. Twenty-nine participants without prior programming experience completed the study, which consisted of a 25 minute scaffolding phase (with explanations and hints) and a 15 minute fadeout phase (no explanations or hints) with a computerized learning environment. Emotional states were tracked via retrospective self-reports in which learners viewed videos of their faces and computer screens recorded during the learning session and made judgments about their emotions at approximately 100 points. The results indicated that flow/engaged (23%), confusion (22%), frustration (14%), and boredom (12%) were the major emotions students experienced, while curiosity, happiness, anxiety, surprise, anger, disgust, fear, and sadness were comparatively rare. The emotions varied as a function of instructional scaffolds and were systematically linked to different student behaviors (idling, constructing code, running code). Boredom, flow/engaged, and confusion were also correlated with performance outcomes. Implications of our findings for affect-sensitive learning interventions are discussed.", "title": "" } ]
scidocsrr
5903c5726eff7934e75da97ed4ff5d98
Neural Arithmetic Expression Calculator
[ { "docid": "68a90df0f3de170d64d3245c8b316460", "text": "In this paper, we propose a new framework for training vision-based agent for First-Person Shooter (FPS) Game, in particular Doom. Our framework combines the state-of-the-art reinforcement learning approach (Asynchronous Advantage Actor-Critic (A3C) model [Mnih et al. (2016)]) with curriculum learning. Our model is simple in design and only uses game states from the AI side, rather than using opponents’ information [Lample & Chaplot (2016)]. On a known map, our agent won 10 out of the 11 attended games and the champion of Track1 in ViZDoom AI Competition 2016 by a large margin, 35% higher score than the second place.", "title": "" } ]
[ { "docid": "7747ea744400418a9003f8bd0990fe71", "text": "0747-5632/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.chb.2009.06.001 * Tel.: +82 02 74", "title": "" }, { "docid": "2f133fee0bcf939268880d6ad9d07b45", "text": "Biological and artificial neural systems are composed of many local processors, and their capabilities depend upon the transfer function that relates each local processor’s outputs to its inputs. This paper uses a recent advance in the foundations of information theory to study the properties of local processors that use contextual input to amplify or attenuate transmission of information about their driving inputs. This advance enables the information transmitted by processors with two distinct inputs to be decomposed into those components unique to each input, that shared between the two inputs, and that which depends on both though it is in neither, i.e. synergy. The decompositions that we report here show that contextual modulation has information processing properties that contrast with those of all four simple arithmetic operators, that it can take various forms, and that the form used in our previous studies of artificial nets composed of local processors with both driving and contextual inputs is particularly well-suited to provide the distinctive capabilities of contextual modulation under a wide range of conditions. We argue that the decompositions reported here could be compared with those obtained from empirical neurobiological and psychophysical data under conditions thought to reflect contextual modulation. That would then shed new light on the underlying processes involved. Finally, we suggest that such decompositions could aid the design of context-sensitive machine learning algorithms.", "title": "" }, { "docid": "47de26ecd5f759afa7361c7eff9e9b25", "text": "At many teaching hospitals, it is common practice for on-call radiology residents to interpret radiology examinations; such reports are later reviewed and revised by an attending physician before being used for any decision making. In case there are substantial problems in the resident’s initial report, the resident is called and the problems are reviewed to prevent similar future reporting errors. However, due to the large volume of reports produced, attending physicians rarely discuss the problems side by side with residents, thus missing an educational opportunity. In this work, we introduce a pipeline to discriminate between reports with significant discrepancies and those with non-significant discrepancies. The former contain severe errors or mis-interpretations, thus representing a great learning opportunity for the resident; the latter presents only minor differences (often stylistic) and have a minor role in the education of a resident. By discriminating between the two, the proposed system could flag those reports that an attending radiology should definitely review with residents under their supervision. We evaluated our approach on 350 manually annotated radiology reports sampled from a collection of tens of thousands. The proposed classifier achieves an Area Under the Curve (AUC) of 0.837, which represent a 14% improvement over the baselines. 
Furthermore, the classifier reduces the False Negative Rate (FNR) by 52%, a desirable performance metric for any recall-oriented task such as the one studied", "title": "" }, { "docid": "9eca36b888845c82cc9e65e6bc0db053", "text": "Word embeddings resulting from neural language models have been shown to be a great asset for a large variety of NLP tasks. However, such architecture might be difficult and time-consuming to train. Instead, we propose to drastically simplify the word embeddings computation through a Hellinger PCA of the word cooccurence matrix. We compare those new word embeddings with some well-known embeddings on named entity recognition and movie review tasks and show that we can reach similar or even better performance. Although deep learning is not really necessary for generating good word embeddings, we show that it can provide an easy way to adapt embeddings to specific tasks.", "title": "" }, { "docid": "494bc0a3ab30c86853de630ae632b3d4", "text": "Although the biomechanical properties of the various types of running foot strike (rearfoot, midfoot, and forefoot) have been studied extensively in the laboratory, only a few studies have attempted to quantify the frequency of running foot strike variants among runners in competitive road races. We classified the left and right foot strike patterns of 936 distance runners, most of whom would be considered of recreational or sub-elite ability, at the 10 km point of a half-marathon/marathon road race. We classified 88.9% of runners at the 10 km point as rearfoot strikers, 3.4% as midfoot strikers, 1.8% as forefoot strikers, and 5.9% of runners exhibited discrete foot strike asymmetry. Rearfoot striking was more common among our sample of mostly recreational distance runners than has been previously reported for samples of faster runners. We also compared foot strike patterns of 286 individual marathon runners between the 10 km and 32 km race locations and observed increased frequency of rearfoot striking at 32 km. A large percentage of runners switched from midfoot and forefoot foot strikes at 10 km to rearfoot strikes at 32 km. The frequency of discrete foot strike asymmetry declined from the 10 km to the 32 km location. Among marathon runners, we found no significant relationship between foot strike patterns and race times.", "title": "" }, { "docid": "e3affa55e444e7d1be87ebc1ef2dc014", "text": "OBJECTIVE\nDescribe objectively the global gaps in policy, data gathering capacity, and resources to develop and implement services to support child mental health.\n\n\nMETHODS\nReport on the World health Organization (WHO) child and adolescent mental health resources Atlas project. The Atlas project utilized key informants and was supplemented by studies that focused on policy. This report also draws on current epidemiological studies to provide a context for understanding the magnitude of the clinical problem.\n\n\nRESULTS\nCurrent global epidemiological data consistently reports that up to 20% of children and adolescents suffer from a disabling mental illness; that suicide is the third leading cause of death among adolescents; and that up to 50% of all adult mental disorders have their onset in adolescence. While epidemiological data appears relatively uniform globally, the same is not true for policy and resources for care. The gaps in resources for child mental health can be categorized as follows: economic, manpower, training, services and policy. 
Key findings from the Atlas project include: lack of program development in low income countries; lack of any policy in low income countries and absent specific comprehensive policy in both low and high income countries; lack of data gathering capacity including that for country-level epidemiology and services outcomes; failure to provide social services in low income countries; lack of a continuum of care; and universal barriers to access. Further, the Atlas findings underscored the need for a critical analysis of the 'burden of disease' as it relates to the context of child and adolescent mental disorders, and the importance of defining the degree of 'impairment' of specific disorders in different cultures.\n\n\nCONCLUSIONS\nThe recent finding of substantial gaps in resources for child mental health underscores the need for enhanced data gathering, refinement of the economic argument for care, and need for innovative training approaches.", "title": "" }, { "docid": "d3281adf2e84a5bab8b03ab9ee8a2977", "text": "The concept of Learning Health Systems (LHS) is gaining momentum as more and more electronic healthcare data becomes increasingly accessible. The core idea is to enable learning from the collective experience of a care delivery network as recorded in the observational data, to iteratively improve care quality as care is being provided in a real world setting. In line with this vision, much recent research effort has been devoted to exploring machine learning, data mining and data visualization methodologies that can be used to derive real world evidence from diverse sources of healthcare data to provide personalized decision support for care delivery and care management. In this chapter, we will give an overview of a wide range of analytics and visualization components we have developed, examples of clinical insights reached from these components, and some new directions we are taking.", "title": "" }, { "docid": "02647d7ab54cc2ae1af5ce156e63f742", "text": "In intelligent transportation systems (ITS), transportation infrastructure is complimented with information and communication technologies with the objectives of attaining improved passenger safety, reduced transportation time and fuel consumption and vehicle wear and tear. With the advent of modern communication and computational devices and inexpensive sensors it is possible to collect and process data from a number of sources. Data fusion (DF) is collection of techniques by which information from multiple sources are combined in order to reach a better inference. DF is an inevitable tool for ITS. This paper provides a survey of how DF is used in different areas of ITS. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f3bda47434c649f6b8fad89199ff5987", "text": "Structural health monitoring (SHM) of civil infrastructure using wireless smart sensor networks (WSSNs) has received significant public attention in recent years. The benefits of WSSNs are that they are low-cost, easy to install, and provide effective data management via on-board computation. This paper reports on the deployment and evaluation of a state-of-the-art WSSN on the new Jindo Bridge, a cable-stayed bridge in South Korea with a 344-m main span and two 70-m side spans. The central components of the WSSN deployment are the Imote2 smart sensor platforms, a custom-designed multimetric sensor boards, base stations, and software provided by the Illinois Structural Health Monitoring Project (ISHMP) Services Toolsuite. 
In total, 70 sensor nodes and two base stations have been deployed to monitor the bridge using an autonomous SHM application with excessive wind and vibration triggering the system to initiate monitoring. Additionally, the performance of the system is evaluated in terms of hardware durability, software stability, power consumption and energy harvesting capabilities. The Jindo Bridge SHM system constitutes the largest deployment of wireless smart sensors for civil infrastructure monitoring to date. This deployment demonstrates the strong potential of WSSNs for monitoring of large scale civil infrastructure.", "title": "" }, { "docid": "965b13ed073b4f3d1c97beffe4db1397", "text": "The purpose of this study was to develop a method of classifying cancers to specific diagnostic categories based on their gene expression signatures using artificial neural networks (ANNs). We trained the ANNs using the small, round blue-cell tumors (SRBCTs) as a model. These cancers belong to four distinct diagnostic categories and often present diagnostic dilemmas in clinical practice. The ANNs correctly classified all samples and identified the genes most relevant to the classification. Expression of several of these genes has been reported in SRBCTs, but most have not been associated with these cancers. To test the ability of the trained ANN models to recognize SRBCTs, we analyzed additional blinded samples that were not previously used for the training procedure, and correctly classified them in all cases. This study demonstrates the potential applications of these methods for tumor diagnosis and the identification of candidate targets for therapy.", "title": "" }, { "docid": "79465d290ab299b9d75e9fa617d30513", "text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.", "title": "" }, { "docid": "87013e0d8d312c16a1d23f4a6917d4fe", "text": "IT consumerization – defined as the use of privately-owned IT resources for business purposes – is steadily growing, thus creating new challenges for enterprises. While numerous practitioner studies suggest a positive effect of this trend on employee work performance, IS research still lacks a systematic understanding of the forces underlying this relationship. In order to close this research gap, we derive three major effects of IT consumerization on employees: 1) an increased workload 2) an elevated autonomy and 3) a higher level of competence. Drawing on cognitive stress model and self-determination theory, we develop an innovative theoretical model of the relationships between IT consumerization and work performance. 
We then conduct an embedded singlecase study, in order to evaluate the constructs and relationships of our structural model by means of qualitative research. Subsequently, the implications for th organizing and practicing IT consumerization are discussed and suggestions on further developing this study are presented.", "title": "" }, { "docid": "2587fd3fa405a8e0fcbfd78bb1201e6d", "text": "After many years of development the active electronically scanned array (AESA) radar technology reached a mature technology level. Many of today's and future radar systems will be equipped with the ASEA technology. T/R-modules are key elements in active phased array antennas for radar and electronic warfare applications. Meanwhile T/R-modules using GaAs MMICs are in mass production with high quantities. Top priority is on continuous improvement of yield figures by optimizing the spread of key performance parameters to come down with cost. To fulfill future demands on power, bandwidth, robustness, weight, multifunctional sensor capability, and overall sensor cost, new emerging semiconductor and packaging technologies have to be implemented for the next generation T/R-modules. Using GaN MMICs as HPAs and also as robust LNAs is a promising approach. Higher integration at the amplitude and phase setting section of the T/R-module is realized with GaAs core chips or even with SiGe multifunction chips. With increasing digital signal processing capability the digital beam forming will get more importance with a high impact on the T/R-modules. For lower production costs but also for sensor integration new packaging concepts are necessary. This includes the transition towards organic packages or the transition from brick style T/R-module to a tile T/R-module.", "title": "" }, { "docid": "6abe1b7806f6452bbcc087b458a7ef96", "text": "We demonstrate distributed, online, and real-time cooperative localization and mapping between multiple robots operating throughout an unknown environment using indirect measurements. We present a novel Expectation Maximization (EM) based approach to efficiently identify inlier multi-robot loop closures by incorporating robot pose uncertainty, which significantly improves the trajectory accuracy over long-term navigation. An EM and hypothesis based method is used to determine a common reference frame. We detail a 2D laser scan correspondence method to form robust correspondences between laser scans shared amongst robots. The implementation is experimentally validated using teams of aerial vehicles, and analyzed to determine its accuracy, computational efficiency, scalability to many robots, and robustness to varying environments. We demonstrate through multiple experiments that our method can efficiently build maps of large indoor and outdoor environments in a distributed, online, and real-time setting.", "title": "" }, { "docid": "b80151949d837ffffdc680e9822b9691", "text": "Neuronal activity causes local changes in cerebral blood flow, blood volume, and blood oxygenation. Magnetic resonance imaging (MRI) techniques sensitive to changes in cerebral blood flow and blood oxygenation were developed by high-speed echo planar imaging. These techniques were used to obtain completely noninvasive tomographic maps of human brain activity, by using visual and motor stimulus paradigms. Changes in blood oxygenation were detected by using a gradient echo (GE) imaging sequence sensitive to the paramagnetic state of deoxygenated hemoglobin. 
Blood flow changes were evaluated by a spin-echo inversion recovery (IR), tissue relaxation parameter T1-sensitive pulse sequence. A series of images were acquired continuously with the same imaging pulse sequence (either GE or IR) during task activation. Cine display of subtraction images (activated minus baseline) directly demonstrates activity-induced changes in brain MR signal observed at a temporal resolution of seconds. During 8-Hz patterned-flash photic stimulation, a significant increase in signal intensity (paired t test; P less than 0.001) of 1.8% +/- 0.8% (GE) and 1.8% +/- 0.9% (IR) was observed in the primary visual cortex (V1) of seven normal volunteers. The mean rise-time constant of the signal change was 4.4 +/- 2.2 s for the GE images and 8.9 +/- 2.8 s for the IR images. The stimulation frequency dependence of visual activation agrees with previous positron emission tomography observations, with the largest MR signal response occurring at 8 Hz. Similar signal changes were observed within the human primary motor cortex (M1) during a hand squeezing task and in animal models of increased blood flow by hypercapnia. By using intrinsic blood-tissue contrast, functional MRI opens a spatial-temporal window onto individual brain physiology.", "title": "" }, { "docid": "11de03383fbd4178613eb4bdf47b90be", "text": "Question Generation (QG) and Question Answering (QA) are some of the many challenges for natural language understanding and interfaces. As humans need to ask good questions, the potential benefits from automated QG systems may assist them in meeting useful inquiry needs. In this paper, we consider an automatic Sentence-to-Question generation task, where given a sentence, the Question Generation (QG) system generates a set of questions for which the sentence contains, implies, or needs answers. To facilitate the question generation task, we build elementary sentences from the input complex sentences using a syntactic parser. A named entity recognizer and a part of speech tagger are applied on each of these sentences to encode necessary information. We classify the sentences based on their subject, verb, object and preposition for determining the possible type of questions to be generated. We use the TREC-2007 (Question Answering Track) dataset for our experiments and evaluation. Mots-clés : Génération de questions, Analyseur syntaxique, Phrases élémentaires, POS Tagging.", "title": "" }, { "docid": "07f9b0c1d6a5ffae7b04dd7a5acd291d", "text": "Cryptographic techniques have applications far beyond the obvious uses of encoding and decoding information. For Internet developers who need to know about capabilities, such as digital signatures, that depend on cryptographic techniques, theres no better overview than Applied Cryptography, the definitive book on the subject. Bruce Schneier covers general classes of cryptographic protocols and then specific techniques, detailing the inner workings of real-world cryptographic algorithms including the Data Encryption Standard and RSA public-key cryptosystems. 
The book includes source-code listings and extensive advice on the practical aspects of cryptography implementation, such as the importance of generating truly random numbers and of keeping keys secure.", "title": "" }, { "docid": "012f34d140fff916cc0127d8cdf9b20d", "text": "We propose rule-based search systems that outperform not only the state-of-the-art but the human performance, measured in accuracy, in GuessWhat?!, a vision-language game where either of two players can be a human. Although those systems achieve the high accuracy, they do not meet other requirements to be considered as an AI system that communicates effectively with humans. To clarify what they lack, we suggest the use of three criteria to enable effective communication with humans in vision-language tasks. These criteria also apply to other two-player vision-language tasks that require communication with humans, e.g., ReferIt.", "title": "" }, { "docid": "00abd2d2c2bdbb81510bac1fb123ad39", "text": "This letter, for the first time, investigates interactive logic cell schemes and transistor architecture scaling options for 5-nm technology node (N5) and beyond. The proposed novel transistors, such as Hexagonal NanoWire (NW) and NanoRing (NR) architectures, are introduced having higher current drivability and lower parasitic capacitance than conventional NW or NanoSlab devices. The standard cell sizing options, including a 1-fin-per-device version and a 2-fin-per-device design, are systematically evaluated. Each device flavor has multiple vertical stacks when wire-like or slab-like structure is used. Comprehensive transistor and logic cell studies demonstrate that the novel NR is the optimal structure for N5 and beyond.", "title": "" } ]
scidocsrr
a7fdc60eed9a1de7950a96108c5d4937
Psychometric Properties and Validation of the Arabic Social Media Addiction Scale
[ { "docid": "bbd44633e14d9ac1d8e54839cbdf5150", "text": "This study aimed to examine social media addiction in a sample of university students. Based on the Internet addiction scale developed by Young (1996) the researcher used cross-sectional survey methodology in which a questionnaire was distributed to 1327 undergraduate students with their consent. Factor analysis of the self-report data showed that social media addiction has three independent dimensions. These dimensions were positively related to the users experience with social media; time spent using social media and satisfaction with them. In addition, social media addiction was a negative predictor of academic performance as measured by a student's GPA. Future studies should consider the cultural values of users and examine the context of social media usage.", "title": "" } ]
[ { "docid": "ec6f53bd2cbc482c1450934b1fd9e463", "text": "Cloud computing providers have setup several data centers at different geographical locations over the Internet in order to optimally serve needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine optimal location for hosting application services to achieve reasonable QoS levels. Further, the Cloud computing providers are unable to predict geographic distribution of users consuming their services, hence the load coordination must happen automatically, and distribution of services must change in response to changes in the load. To counter this problem, we advocate creation of federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that federated Cloud computing model has immense potential as it offers significant performance gains as regards to response time and cost saving under dynamic workload scenarios.", "title": "" }, { "docid": "ca072e97f8a5486347040aeaa7909d60", "text": "Camera-based stereo-vision provides cost-efficient vision capabilities for robotic systems. The objective of this paper is to examine the performance of stereo-vision as means to enable a robotic inspection cell for haptic quality testing with the ability to detect relevant information related to the inspection task. This information comprises the location and 3D representation of a complex object under inspection as well as the location and type of quality features which are subject to the inspection task. Among the challenges is the low-distinctiveness of features in neighboring area, inconsistent lighting, similar colors as well as low intra-class variances impeding the retrieval of quality characteristics. The paper presents the general outline of the vision chain as well as performance analysis of various algorithms for relevant steps in the machine vision chain thus indicating the capabilities and drawbacks of a camera-based stereo-vision for flexible use in complex machine vision tasks.", "title": "" }, { "docid": "89e25ae1d0f5dbe3185a538c2318b447", "text": "This paper presents a fully-integrated 3D image radar engine utilizing beamforming for electrical scanning and precise ranging technique for distance measurement. Four transmitters and four receivers form a sensor frontend with phase shifters and power combiners adjusting the beam direction. A built-in 31.3 GHz clock source and a frequency tripler provide both RF carrier and counting clocks for the distance measurement. Flip-chip technique with low-temperature co-fired ceramic (LTCC) antenna design creates a miniature module as small as 6.5 × 4.4 × 0.8 cm3. 
Designed and fabricated in 65 nm CMOS technology, the transceiver array chip dissipates 960 mW from a 1.2-V supply and occupies chip area of 3.6 × 2.1 mm 2. This prototype achieves ±28° scanning range, 2-m maximum distance, and 1 mm depth resolution.", "title": "" }, { "docid": "66adce6a959d325e741e9cb3a0c62a42", "text": "This paper explores the mathematical and algorithmic properties of two sample-based texture models: random phase noise (RPN) and asymptotic discrete spot noise (ADSN). These models permit to synthesize random phase textures. They arguably derive from linearized versions of two early Julesz texture discrimination theories. The ensuing mathematical analysis shows that, contrarily to some statements in the literature, RPN and ADSN are different stochastic processes. Nevertheless, numerous experiments also suggest that the textures obtained by these algorithms from identical samples are perceptually similar. The relevance of this study is enhanced by three technical contributions providing solutions to obstacles that prevented the use of RPN or ADSN to emulate textures. First, RPN and ADSN algorithms are extended to color images. Second, a preprocessing is proposed to avoid artifacts due to the nonperiodicity of real-world texture samples. Finally, the method is extended to synthesize textures with arbitrary size from a given sample.", "title": "" }, { "docid": "b8419a3450b66dfc2defebf296d1fb8f", "text": "We assessed the usefulness of routine MRI for the differential diagnosis of Parkinson's disease (PD) with “atypical” parkinsonian syndromes in everyday clinical practice. We studied routinely performed MRI in PD (n = 32), multiple system atrophy (MSA, n = 28), progressive supranuclear palsy (PSP, n = 30), and corticobasal degeneration (CBD, n = 26). From a preliminary analysis of 26 items, 4 independent investigators rated 11 easily recognizable MRI pointers organized as a simple scoring system. The frequency, severity and inter-rater agreement were determined. The total severity score was subdivided into “cortical”, “putaminal”, “midbrain”, and “pontocerebellar” scores. The frequency of putaminal involvement (100%) and vermian cerebellar atrophy (45%) was significantly higher in MSA, but that of cortical atrophy (50%), midbrain atrophy and 3rd ventricle enlargement (75%) was higher in PSP and CBD. The median total score fairly differentiated “atypical” parkinsonian syndromes from PD (positive predictive value-PPV-90%). However, the median total score was unable to differentiate atypical parkinsonian syndromes each other. The “cortical” score distinguished CBD and PSP from MSA with a fair PPV (>90%). The PPV of the “putaminal” score was high (70%) for the differential diagnosis of MSA with PSP and CBD. The “midbrain” score was significantly higher in PSP and CBD compared to MSA. These results are in accordance with the underlying pathology found in these disorders and demonstrate that a simple MRI scoring procedure may help the neurologist to differentiate primary causes of parkinsonism in everyday practice.", "title": "" }, { "docid": "34ff8cd119a77057ccfc0ee682dfc0ac", "text": "A variety of real-world processes (over networks) produce sequences of data whose complex temporal dynamics need to be studied. More especially, the event timestamps can carry important information about the underlying network dynamics, which otherwise are not available from the time-series evenly sampled from continuous signals. 
Moreover, in most complex processes, event sequences and evenly-sampled times series data can interact with each other, which renders joint modeling of those two sources of data necessary. To tackle the above problems, in this paper, we utilize the rich framework of (temporal) point processes to model event data and timely update its intensity function by the synergic twin Recurrent Neural Networks (RNNs). In the proposed architecture, the intensity function is synergistically modulated by one RNN with asynchronous events as input and another RNN with time series as input. Furthermore, to enhance the interpretability of the model, the attention mechanism for the neural point process is introduced. The whole model with event type and timestamp prediction output layers can be trained end-to-end and allows a black-box treatment for modeling the intensity. We substantiate the superiority of our model in synthetic data and three real-world benchmark datasets.", "title": "" }, { "docid": "42aca9ffdd5c0d2a2f310280d12afa1a", "text": "Communication skills courses are an essential component of undergraduate and postgraduate training and effective communication skills are actively promoted by medical defence organisations as a means of decreasing litigation. This article discusses active listening, a difficult discipline for anyone to practise, and examines why this is particularly so for doctors. It draws together themes from key literature in the field of communication skills, and examines how these theories apply in general practice.", "title": "" }, { "docid": "13ec102cd2f9f80fbb827cd702a57a8b", "text": "This paper presents a mutual capacitive touch screen panel (TSP) readout IC (ROIC) with a differential continuousmode parallel operation architecture (DCPA). The proposed architecture achieves a high product of signal-to-noise ratio (SNR) and frame rate, which is a requirement of ROIC for large-sized TSP. DCPA is accomplished by using the proposed differential sensing method with a parallel architecture in a continuousmode. This architecture is implemented using a continuous-type transmitter for parallel signaling and a differential-architecture receiver. A continuous-type differential charge amplifier removes the common-mode noise component, and reduces the self-noise by the band-pass filtering effect of the continuous-mode charge amplifier. In addition, the differential parallel architecture cancels the timing skew problem caused by the continuous-mode parallel operation and effectively enhances the power spectrum density of the signal. The proposed ROIC was fabricated using a 0.18-μm CMOS process and occupied an active area of 1.25 mm2. The proposed system achieved a 72 dB SNR and 240 Hz frame rate with a 32 channel TX by 10 channel RX mutual capacitive TSP. Moreover, the proposed differential-parallel architecture demonstrated higher immunity to lamp noise and display noise. The proposed system consumed 42.5 mW with a 3.3-V supply.", "title": "" }, { "docid": "780e49047bdacda9862c51338aa1397f", "text": "We consider stochastic volatility models under parameter uncertainty and investigate how model derived prices of European options are affected. We let the pricing parameters evolve dynamically in time within a specified region, and formalise the problem as a control problem where the control acts on the parameters to maximise/minimise the option value. 
Through a dual representation with backward stochastic differential equations, we obtain explicit equations for Heston’s model and investigate several numerical solutions thereof. In an empirical study, we apply our results to market data from the S&P 500 index where the model is estimated to historical asset prices. We find that the conservative model-prices cover 98% of the considered market-prices for a set of European call options.", "title": "" }, { "docid": "1c16fa259b56e3d64f2468fdf758693a", "text": "Dysregulated expression of microRNAs (miRNAs) in various tissues has been associated with a variety of diseases, including cancers. Here we demonstrate that miRNAs are present in the serum and plasma of humans and other animals such as mice, rats, bovine fetuses, calves, and horses. The levels of miRNAs in serum are stable, reproducible, and consistent among individuals of the same species. Employing Solexa, we sequenced all serum miRNAs of healthy Chinese subjects and found over 100 and 91 serum miRNAs in male and female subjects, respectively. We also identified specific expression patterns of serum miRNAs for lung cancer, colorectal cancer, and diabetes, providing evidence that serum miRNAs contain fingerprints for various diseases. Two non-small cell lung cancer-specific serum miRNAs obtained by Solexa were further validated in an independent trial of 75 healthy donors and 152 cancer patients, using quantitative reverse transcription polymerase chain reaction assays. Through these analyses, we conclude that serum miRNAs can serve as potential biomarkers for the detection of various cancers and other diseases.", "title": "" }, { "docid": "eb1430bea17d7e6311980d697f7313bc", "text": "Pragmatics performs agile development, has been rated at CMMI (Capability Maturity Model Integration) Maturity Level 4, and is striving to achieve CMMI Maturity Level (CML) 5. By maturing our agile disciplines, we feel we will not only improve the performance of our agile teams, which will ultimately benefit our agile development practices regardless of our appraisal rating, but will also lead to our being appraised at CML 5. This experience report describes the steps we are taking to improve our agile development disciplines, which we believe will lead to our being appraised at CML 5.", "title": "" }, { "docid": "24c2877aff9c4e8441dbbbd4481370b6", "text": "Ramp merging is a critical maneuver for road safety and traffic efficiency. Most of the current automated driving systems developed by multiple automobile manufacturers and suppliers are typically limited to restricted access freeways only. Extending the automated mode to ramp merging zones presents substantial challenges. One is that the automated vehicle needs to incorporate a future objective (e.g. a successful and smooth merge) and optimize a long-term reward that is impacted by subsequent actions when executing the current action. Furthermore, the merging process involves interaction between the merging vehicle and its surrounding vehicles whose behavior may be cooperative or adversarial, leading to distinct merging countermeasures that are crucial to successfully complete the merge. In place of the conventional rule-based approaches, we propose to apply reinforcement learning algorithm on the automated vehicle agent to find an optimal driving policy by maximizing the long-term reward in an interactive driving environment. 
Most importantly, in contrast to most reinforcement learning applications in which the action space is resolved as discrete, our approach treats the action space as well as the state space as continuous without incurring additional computational costs. Our unique contribution is the design of the Q-function approximation whose format is structured as a quadratic function, by which simple but effective neural networks are used to estimate its coefficients. The results obtained through the implementation of our training platform demonstrate that the vehicle agent is able to learn a safe, smooth and timely merging policy, indicating the effectiveness and practicality of our approach.", "title": "" }, { "docid": "ba02d2ecf18d0bc24a3e8884b5de54ed", "text": "We present glasses: Global optimisation with Look-Ahead through Stochastic Simulation and Expected-loss Search. The majority of global optimisation approaches in use are myopic, in only considering the impact of the next function value; the non-myopic approaches that do exist are able to consider only a handful of future evaluations. Our novel algorithm, glasses, permits the consideration of dozens of evaluations into the future. This is done by approximating the ideal look-ahead loss function, which is expensive to evaluate, by a cheaper alternative in which the future steps of the algorithm are simulated beforehand. An Expectation Propagation algorithm is used to compute the expected value of the loss. We show that the far-horizon planning thus enabled leads to substantive performance gains in empirical tests.", "title": "" }, { "docid": "3988a78bd37adaed9dc8af87c7e1266b", "text": "Current R&D project was the development of a software platform designed to be an advanced research testbed for the prototyping of Haskell based novel technologies in Cryo-EM Methodologies. Focused upon software architecture concepts and frameworks involving Haskell image processing libraries. Cryo-EM is an important tool to probe nano-bio systems.A number of hi-tech firms are implementing BIG-DATA analysis using Haskell especially in the domains of Pharma,Bio-informatics etc. Hence current research paper is one of the pioneering attempts made by the author to encourage advanced data analysis in the Cryo-EM domain to probe important aspects of nano-bio applications.", "title": "" }, { "docid": "37ccaaf82bd001e48ef1d4a2651a5700", "text": "In a wireless network with a single source and a single destination and an arbitrary number of relay nodes, what is the maximum rate of information flow achievable? We make progress on this long standing problem through a two-step approach. First, we propose a deterministic channel model which captures the key wireless properties of signal strength, broadcast and superposition. We obtain an exact characterization of the capacity of a network with nodes connected by such deterministic channels. This result is a natural generalization of the celebrated max-flow min-cut theorem for wired networks. Second, we use the insights obtained from the deterministic analysis to design a new quantize-map-and-forward scheme for Gaussian networks. In this scheme, each relay quantizes the received signal at the noise level and maps it to a random Gaussian codeword for forwarding, and the final destination decodes the source's message based on the received signal. We show that, in contrast to existing schemes, this scheme can achieve the cut-set upper bound to within a gap which is independent of the channel parameters. 
In the case of the relay channel with a single relay as well as the two-relay Gaussian diamond network, the gap is 1 bit/s/Hz. Moreover, the scheme is universal in the sense that the relays need no knowledge of the values of the channel parameters to (approximately) achieve the rate supportable by the network. We also present extensions of the results to multicast networks, half-duplex networks, and ergodic networks.", "title": "" }, { "docid": "4fabfd530004921901d09134ebfd0eae", "text": "“Additive Manufacturing Technologies: 3D Printing, Rapid Prototyping, and Direct Digital Manufacturing” is authored by Ian Gibson, David Rosen and Brent Stucker, who collectively possess 60 years’ experience in the field of additive manufacturing (AM). This is the second edition of the book which aims to include current developments and innovations in a rapidly changing field. Its primary aim is to serve as a teaching aid for developing and established curricula, therefore becoming an all-encompassing introductory text for this purpose. It is also noted that researchers may find the text useful as a guide to the ‘state-of-the-art’ and to identify research opportunities. The book is structured to provide justification and information for the use and development of AM by using standardised terminology to conform to standards (American Society for Testing and Materials (ASTM) F42) introduced since the first edition. The basic principles and historical developments for AM are introduced in summary in the first three chapters of the book and this serves as an excellent introduction for the uninitiated. Chapters 4–11 focus on the core technologies of AM individually and, in most cases, in comprehensive detail which gives those interested in the technical application and development of the technologies a solid footing. The remaining chapters provide guidelines and examples for various stages of the process including machine and/or materials selection, design considerations and software limitations, applications and post-processing considerations.", "title": "" }, { "docid": "c1f7a1733193356f430be594585e4dfe", "text": "A helicopter offers the capability of hover, slow forward displacement, vertical take-off and landing while a conventional airplane has the performance of fast forward movement, long reach and superior endurance. The aim of this paper is to present the modelling and control of a tilt tri-rotor UAV's configuration that combines the advantages of both rotary wing and fixed wing vehicle.", "title": "" }, { "docid": "ab7184c576396a1da32c92093d606a53", "text": "Power electronics has progressively gained an important status in power generation, distribution, and consumption. With more than 70% of electricity processed through power electronics, recent research endeavors to improve the reliability of power electronic systems to comply with more stringent constraints on cost, safety, and availability in various applications. This paper serves to give an overview of the major aspects of reliability in power electronics and to address the future trends in this multidisciplinary research direction. The ongoing paradigm shift in reliability research is presented first.
Then, the three major aspects of power electronics reliability are discussed, respectively, which cover physics-of-failure analysis of critical power electronic components, state-of-the-art design for reliability process and robustness validation, and intelligent control and condition monitoring to achieve improved reliability under operation. Finally, the challenges and opportunities for achieving more reliable power electronic systems in the future are discussed.", "title": "" }, { "docid": "7401d33980f6630191aa7be7bf380ec3", "text": "We present PennCOSYVIO, a new challenging Visual Inertial Odometry (VIO) benchmark with synchronized data from a VI-sensor (stereo camera and IMU), two Project Tango hand-held devices, and three GoPro Hero 4 cameras. Recorded at UPenn's Singh center, the 150m long path of the hand-held rig crosses from outdoors to indoors and includes rapid rotations, thereby testing the abilities of VIO and Simultaneous Localization and Mapping (SLAM) algorithms to handle changes in lighting, different textures, repetitive structures, and large glass surfaces. All sensors are synchronized and intrinsically and extrinsically calibrated. We demonstrate the accuracy with which ground-truth poses can be obtained via optic localization off of fiducial markers. The data set can be found at https://daniilidis-group.github.io/penncosyvio/.", "title": "" }, { "docid": "acbb1a68d9e0e1768fff8acc8ae42b32", "text": "The rapid increase in the number of Android malware poses great challenges to anti-malware systems, because the sheer number of malware samples overwhelms malware analysis systems. The classification of malware samples into families, such that the common features shared by malware samples in the same family can be exploited in malware detection and inspection, is a promising approach for accelerating malware analysis. Furthermore, the selection of representative malware samples in each family can drastically decrease the number of malware to be analyzed. However, the existing classification solutions are limited because of the following reasons. First, the legitimate part of the malware may misguide the classification algorithms because the majority of Android malware are constructed by inserting malicious components into popular apps. Second, the polymorphic variants of Android malware can evade detection by employing transformation attacks. In this paper, we propose a novel approach that constructs frequent subgraphs (fregraphs) to represent the common behaviors of malware samples that belong to the same family. Moreover, we propose and develop FalDroid, a novel system that automatically classifies Android malware and selects representative malware samples in accordance with fregraphs. We apply it to 8407 malware samples from 36 families. Experimental results show that FalDroid can correctly classify 94.2% of malware samples into their families using approximately 4.6 sec per app. FalDroid can also dramatically reduce the cost of malware investigation by selecting only 8.5% to 22% representative samples that exhibit the most common malicious behavior among all samples.", "title": "" } ]
scidocsrr
8444857887bd145012d26e4fb96ccb81
Performance characterization and acceleration of in-memory file systems for Hadoop and Spark applications on HPC clusters
[ { "docid": "bd39bc6b8038332c13601da16363b70b", "text": "Hadoop MapReduce is the most popular open-source parallel programming model extensively used in Big Data analytics. Although fault tolerance and platform independence make Hadoop MapReduce the most popular choice for many users, it still has huge performance improvement potentials. Recently, RDMA-based design of Hadoop MapReduce has alleviated major performance bottlenecks with the implementation of many novel design features such as in-memory merge, prefetching and caching of map outputs, and overlapping of merge and reduce phases. Although these features reduce the overall execution time for MapReduce jobs compared to the default framework, further improvement is possible if shuffle and merge phases can also be overlapped with the map phase during job execution. In this paper, we propose HOMR (a Hybrid approach to exploit maximum Overlapping in MapReduce), that incorporates not only the features implemented in RDMA-based design, but also exploits maximum possible overlapping among all different phases compared to current best approaches. Our solution introduces two key concepts: Greedy Shuffle Algorithm and On-demand Shuffle Adjustment, both of which are essential to achieve significant performance benefits over the default MapReduce framework. Architecture of HOMR is generalized enough to provide performance efficiency both over different Sockets interface as well as previous RDMA-based designs over InfiniBand. Performance evaluations show that HOMR with RDMA over InfiniBand can achieve performance benefits of 54% and 56% compared to default Hadoop over IPoIB (IP over InfiniBand) and 10GigE, respectively. Compared to the previous best RDMA-based designs, this benefit is 29%. HOMR over Sockets also achieves a maximum of 38-40% benefit compared to default Hadoop over Sockets interface. We also evaluate our design with real-world workloads like SWIM and PUMA, and observe benefits of up to 16% and 18%, respectively, over the previous best-case RDMA-based design. To the best of our knowledge, this is the first approach to achieve maximum possible overlapping for MapReduce framework.", "title": "" } ]
[ { "docid": "707828ef765512b0b5ebef27ca133504", "text": "In the mammalian myocardium, potassium (K(+)) channels control resting potentials, action potential waveforms, automaticity, and refractory periods and, in most cardiac cells, multiple types of K(+) channels that subserve these functions are expressed. Molecular cloning has revealed the presence of a large number of K(+) channel pore forming (alpha) and accessory (beta) subunits in the heart, and considerable progress has been made recently in defining the relationships between expressed K(+) channel subunits and functional cardiac K(+) channels. To date, more than 20 mouse models with altered K(+) channel expression/functioning have been generated using dominant-negative transgenic and targeted gene deletion approaches. In several instances, the genetic manipulation of K(+) channel subunit expression has revealed the role of specific K(+) channel subunit subfamilies or individual K(+) channel subunit genes in the generation of myocardial K(+) channels. In other cases, however, the phenotypic consequences have been unexpected. This review summarizes what has been learned from the in situ genetic manipulation of cardiac K(+) channel functioning in the mouse, discusses the limitations of the models developed to date, and explores the likely directions of future research.", "title": "" }, { "docid": "d2b45d76e93f07ededbab03deee82431", "text": "A cordless battery charger will greatly improve the user friendliness of electric vehicles (EVs), accelerating the replacement of traditional internal combustion engine (ICE) vehicles with EVs and improving energy sustainability as a result. Resonant circuits are used for both the power transmitter and receiver of a cordless charger to compensate their coils and improve power transfer efficiency. However, conventional compensation circuit topology is not suitable for application to an EV, which involves very large power, a wide gap between the transmitter and receiver coils, and large horizontal misalignment. This paper proposes a novel compensation circuit topology that has a carefully designed series capacitor added to the parallel resonant circuit of the receiver. The proposed circuit has been implemented and tested on an EV. The simulation and experimental results are presented to show that the circuit can improve the power factor and power transfer efficiency, and as a result, allow a larger gap between the transmitter and receiver coils.", "title": "" }, { "docid": "397c25e6381818eabadf23d214409e45", "text": "s of Invited Talks Plagiarizing Nature for Engineering Analysis and Design", "title": "" }, { "docid": "a7f4a57534ee0a02b675e3b7acdf53d3", "text": "Semantic-oriented service matching is one of the challenges in automatic Web service discovery. Service users may search for Web services using keywords and receive the matching services in terms of their functional profiles. A number of approaches to computing the semantic similarity between words have been developed to enhance the precision of matchmaking, which can be classified into ontology-based and corpus-based approaches. The ontology-based approaches commonly use the differentiated concept information provided by a large ontology for measuring lexical similarity with word sense disambiguation. Nevertheless, most of the ontologies are domain-special and limited to lexical coverage, which have a limited applicability. 
On the other hand, corpus-based approaches rely on the distributional statistics of context to represent per word as a vector and measure the distance of word vectors. However, the polysemous problem may lead to a low computational accuracy. In this paper, in order to augment the semantic information content in word vectors, we propose a multiple semantic fusion (MSF) model to generate sense-specific vector per word. In this model, various semantic properties of the general-purpose ontology WordNet are integrated to fine-tune the distributed word representations learned from corpus, in terms of vector combination strategies. The retrofitted word vectors are modeled as semantic vectors for estimating semantic similarity. The MSF model-based similarity measure is validated against other similarity measures on multiple benchmark datasets. Experimental results of word similarity evaluation indicate that our computational method can obtain higher correlation coefficient with human judgment in most cases. Moreover, the proposed similarity measure is demonstrated to improve the performance of Web service matchmaking based on a single semantic resource. Accordingly, our findings provide a new method and perspective to understand and represent lexical semantics.", "title": "" }, { "docid": "20adf89d9301cdaf64d8bf684886de92", "text": "A standard planar Kernel Density Estimation (KDE) aims to produce a smooth density surface of spatial point events over a 2-D geographic space. However the planar KDE may not be suited for characterizing certain point events, such as traffic accidents, which usually occur inside a 1-D linear space, the roadway network. This paper presents a novel network KDE approach to estimating the density of such spatial point events. One key feature of the new approach is that the network space is represented with basic linear units of equal network length, termed lixel (linear pixel), and related network topology. The use of lixel not only facilitates the systematic selection of a set of regularly spaced locations along a network for density estimation, but also makes the practical application of the network KDE feasible by significantly improving the computation efficiency. The approach is implemented in the ESRI ArcGIS environment and tested with the year 2005 traffic accident data and a road network in the Bowling Green, Kentucky area. The test results indicate that the new network KDE is more appropriate than standard planar KDE for density estimation of traffic accidents, since the latter covers space beyond the event context (network space) and is likely to overestimate the density values. The study also investigates the impacts on density calculation from two kernel functions, lixel lengths, and search bandwidths. It is found that the kernel function is least important in structuring the density pattern over network space, whereas the lixel length critically impacts the local variation details of the spatial density pattern. The search bandwidth imposes the highest influence by controlling the smoothness of the spatial pattern, showing local effects at a narrow bandwidth and revealing \" hot spots \" at larger or global scales with a wider bandwidth. 
More significantly, the idea of representing a linear network by a network system of equal-length lixels may potentially lead the way to developing a suite of other network related spatial analysis and modeling methods.", "title": "" }, { "docid": "cdb556e7f951c8f0572b5ac0b6468597", "text": "Modern 3D laser-range scanners have a high data rate, making online simultaneous localization and mapping (SLAM) computationally challenging. Recursive state estimation techniques are efficient but commit to a state estimate immediately after a new scan is made, which may lead to misalignments of measurements. We present a 3D SLAM approach that allows for refining alignments during online mapping. Our method is based on efficient local mapping and a hierarchical optimization back-end. Measurements of a 3D laser scanner are aggregated in local multiresolution maps by means of surfel-based registration. The local maps are used in a multi-level graph for allocentric mapping and localization. In order to incorporate corrections when refining the alignment, the individual 3D scans in the local map are modeled as a sub-graph and graph optimization is performed to account for drift and misalignments in the local maps. Furthermore, in each sub-graph, a continuous-time representation of the sensor trajectory allows to correct measurements between scan poses. We evaluate our approach in multiple experiments by showing qualitative results. Furthermore, we quantify the map quality by an entropy-based measure.", "title": "" }, { "docid": "8259d1acdfec480fc20a585f64bf54e2", "text": "Technical debt describes the effect of immature software artifacts on software maintenance - the potential of extra effort required in future as if paying interest for the incurred debt. The uncertainty of interest payment further complicates the problem of what debt should be incurred or repaid and when. To help software managers make informed decisions, a portfolio approach is proposed in this paper. The approach leverages the portfolio management theory in the finance domain to determine the optimal collection of technical debt items that should be incurred or held. We expect this approach could provide a new perspective for technical debt management.", "title": "" }, { "docid": "b171fe4c33fe322b0491576da078305c", "text": "Structural equation modeling (SEM) is a viable multivariate tool used by communication researchers for the past quarter century. Building off Cappella (1975) as well as McPhee and Babrow (1987), this study summarizes the use of this technique from 1995–2000 in 37 communication-based academic journals. We identify and critically assess 3 unique methods for testing structural relationships via SEM in terms of the specification, estimation, and evaluation of their respective structural equation models. We provide general guidelines for the use of SEM and make recommendations concerning latent variable models, sample size, reporting parameter estimates, model fit statistics, cross-sectional data, univariate normality, cross-validation, nonrecursive modeling, and the decomposition of effects (direct, indirect, and total).", "title": "" }, { "docid": "8326f993dbb83e631d2e6892e03520e7", "text": "Within NASA, there is an increasing awareness that software is of growing importance to the success of missions. Much data has been collected, and many theories have been advanced on how to reduce or eliminate errors in code. However, learning requires experience.
This article documents a new NASA initiative to build a centralized repository of software defect data; in particular, it documents one specific case study on software metrics. Software metrics are used as a basis for prediction of errors in code modules, but there are many different metrics available. McCabe is one of the more popular tools used to produce metrics, but, as will be shown in this paper, other metrics can be more significant.", "title": "" }, { "docid": "74273502995ceaac87737d274379d7dc", "text": "Majority of the systems designed to handle big RDF data rely on a single high-end computer dedicated to a certain RDF dataset and do not easily scale out, at the same time several clustered solution were tested and both the features and the benchmark results were unsatisfying. In this paper we describe a system designed to tackle such issues, a system that connects RDF4J and Apache HBase in order to receive an extremely scalable RDF store.", "title": "" }, { "docid": "2a3700cec51962d35e4579c41d5aa902", "text": "This paper develops a novel control system for functional electrical stimulation (FES) locomotion, which aims to generate normal locomotion for paraplegics via FES. It explores the possibility of applying ideas from biology to engineering. The neural control mechanism of the biological motor system, the central pattern generator, has been adopted in the control system design. Some artificial control techniques such as neural network control, fuzzy logic, control and impedance control are incorporated to refine the control performance. Several types of sensory feedback are integrated to endow this control system with an adaptive ability. A musculoskeletal model with 7 segments and 18 muscles is constructed for the simulation study. Satisfactory simulation results are achieved under this FES control system, which indicates a promising technique for the potential application of FES locomotion in future.", "title": "" }, { "docid": "77fdc64a8b79458cd942dc39c88b5bb5", "text": "Artificial Neural networks (ANNs) are one of the most well-established machine learning techniques and have a wide range of applications, such as Recognition, Mining and Synthesis (RMS). As many of these applications are inherently error-tolerant, in this work, we propose a novel approximate computing framework for ANN, namely ApproxANN. When compared to existing solutions, ApproxANN considers approximation for both computation and memory accesses, thereby achieving more energy savings. To be specific, ApproxANN characterizes the impact of neurons on the output quality in an effective and efficient manner, and judiciously determine how to approximate the computation and memory accesses of certain less critical neurons to achieve the maximum energy efficiency gain under a given quality constraint. Experimental results on various ANN applications with different datasets demonstrate the efficacy of the proposed solution.", "title": "" }, { "docid": "074cc516bd8cf4ff6457f38420a3bf3d", "text": "A transition from coplanar waveguide (CPW)-to-microstrip with vias is often used in wafer-probe measurements. This paper shows how field and impedance matching are used to develop a wideband transition. This paper demonstrates how the presence and placement of the vias affect the bandwidth and alters the impedance of the transition. The measured results on a transition show that a wideband transition with return loss better than 10 dB and an insertion loss less than 1.5 dB up to 36.64 GHz is obtained. 
The measurements show excellent agreement with simulation. The work presented in this paper provides a better understanding about a CPW-to-microstrip transition with vias as well as design procedures and principles that can be utilized to facilitate the realization of a broadband CPW-to-microstrip transition with vias.", "title": "" }, { "docid": "422c0890804654613ea37fbf1186fda1", "text": "Because of the distance between the skull and brain and their different resistivities, electroencephalographic (EEG) data collected from any point on the human scalp includes activity generated within a large brain area. This spatial smearing of EEG data by volume conduction does not involve significant time delays, however, suggesting that the Independent Component Analysis (ICA) algorithm of Bell and Sejnowski [1] is suitable for performing blind source separation on EEG data. The ICA algorithm separates the problem of source identification from that of source localization. First results of applying the ICA algorithm to EEG and event-related potential (ERP) data collected during a sustained auditory detection task show: (1) ICA training is insensitive to different random seeds. (2) ICA may be used to segregate obvious artifactual EEG components (line and muscle noise, eye movements) from other sources. (3) ICA is capable of isolating overlapping EEG phenomena, including alpha and theta bursts and spatially-separable ERP components, to separate ICA channels. (4) Nonstationarities in EEG and behavioral state can be tracked using ICA via changes in the amount of residual correlation between ICA-filtered output channels.", "title": "" }, { "docid": "914d17433df678e9ace1c9edd1c968d3", "text": "We propose a Deep Learning approach to the visual question answering task, where machines answer to questions about real-world images. By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We evaluate our approaches on the DAQUAR as well as the VQA dataset where we also report various baselines, including an analysis how much information is contained in the language part only. To study human consensus, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. Finally, we evaluate a rich set of design choices how to encode, combine and decode information in our proposed Deep Learning formulation.", "title": "" }, { "docid": "7438ff346fa26661822a3a96c13c6d6e", "text": "As in any new technology adoption in organizations, big data solutions (BDS) also presents some security threat and challenges, especially due to the characteristics of big data itself the volume, velocity and variety of data. Even though many security considerations associated to the adoption of BDS have been publicized, it remains unclear whether these publicized facts have any actual impact on the adoption of the solutions. Hence, it is the intent of this research-in-progress to examine the security determinants by focusing on the influence that various technological factors in security, organizational security view and security related environmental factors have on BDS adoption.
One technology adoption framework, the TOE (technological-organizational-environmental) framework is adopted as the main conceptual research framework. This research will be conducted using a Sequential Explanatory Mixed Method approach. Quantitative method will be used for the first part of the research, specifically using an online questionnaire survey. The result of this first quantitative process will then be further explored and complemented with a case study. Results generated from both quantitative and qualitative phases will then be triangulated and a cross-study synthesis will be conducted to form the final result and discussion.", "title": "" }, { "docid": "e6912f1b9e6060b452f2313766288e97", "text": "The air-core inductance of power transformers is measured using a nonideal low-power rectifier. Its dc output serves to drive the transformer into deep saturation, and its ripple provides low-amplitude variable excitation. The principal advantage of the proposed method is its simplicity. For validation, the experimental results are compared with 3-D finite-element simulations.", "title": "" }, { "docid": "806b837d8af1e22794b67a43bdf5a954", "text": "Context: GitHub, nowadays the most popular social coding platform, has become the reference for mining Open Source repositories, a growing research trend aiming at learning from previous software projects to improve the development of new ones. In the last years, a considerable amount of research papers have been published reporting findings based on data mined from GitHub. As the community continues to deepen in its understanding of software engineering thanks to the analysis performed on this platform, we believe that it is worthwhile to reflect on how research papers have addressed the task of mining GitHub and what findings they have reported. Objective: The main objective of this paper is to identify the quantity, topic, and empirical methods of research works, targeting the analysis of how software development practices are influenced by the use of a distributed social coding platform like GitHub. Method: A systematic mapping study was conducted with four research questions and assessed 80 publications from 2009 to 2016. Results: Most works focused on the interaction around coding-related tasks and project communities. We also identified some concerns about how reliable were these results based on the fact that, overall, papers used small data sets and poor sampling techniques, employed a scarce variety of methodologies and/or were hard to replicate. Conclusions: This paper attested the high activity of research work around the field of Open Source collaboration, especially in the software domain, revealed a set of shortcomings and proposed some actions to mitigate them. We hope that this paper can also create the basis for additional studies on other collaborative activities (like book writing for instance) that are also moving to GitHub.", "title": "" }, { "docid": "edf744b475ec90a123685b4f178506c0", "text": "Web servers are ubiquitous, remotely accessible, and often misconfigured. In addition, custom web-based applications may introduce vulnerabilities that are overlooked even by the most security-conscious server administrators. Consequently, web servers are a popular target for hackers. To mitigate the security exposure associated with web servers, intrusion detection systems are deployed to analyze and screen incoming requests. 
The goal is to perform early detection of malicious activity and possibly prevent more serious damage to the protected site. Even though intrusion detection is critical for the security of web servers, the intrusion detection systems available today only perform very simple analyses and are often vulnerable to simple evasion techniques. In addition, most systems do not provide sophisticated attack languages that allow a system administrator to specify custom, complex attack scenarios to be detected. This paper presents WebSTAT, an intrusion detection system that analyzes web requests looking for evidence of malicious behavior. The system is novel in several ways. First of all, it provides a sophisticated language to describe multistep attacks in terms of states and transitions. In addition, the modular nature of the system supports the integrated analysis of network traffic sent to the server host, operating system-level audit data produced by the server host, and the access logs produced by the web server. By correlating different streams of events, it is possible to achieve more effective detection of web-based attacks.", "title": "" }, { "docid": "fc049d0c325aac0c5123c6bbfc353976", "text": "Clustering on multi-type relational data has attracted more and more attention in recent years due to its high impact on various important applications, such as Web mining, e-commerce and bioinformatics. However, the research on general multi-type relational data clustering is still limited and preliminary. The contribution of the paper is three-fold. First, we propose a general model, the collective factorization on related matrices, for multi-type relational data clustering. The model is applicable to relational data with various structures. Second, under this model, we derive a novel algorithm, the spectral relational clustering, to cluster multi-type interrelated data objects simultaneously. The algorithm iteratively embeds each type of data objects into low dimensional spaces and benefits from the interactions among the hidden structures of different types of data objects. Extensive experiments demonstrate the promise and effectiveness of the proposed algorithm. Third, we show that the existing spectral clustering algorithms can be considered as the special cases of the proposed model and algorithm. This demonstrates the good theoretic generality of the proposed model and algorithm.", "title": "" } ]
scidocsrr
7eceb0370618e49a4c2876da310343d3
Dependency Parsing Techniques for Information Extraction
[ { "docid": "8bb0e19b03468313a52a1800a56f21db", "text": "DeSR is a statistical transition-based dependency parser which learns from annotated corpora which actions to perform for building parse trees while scanning a sentence. We describe recent improvements to the parser, in particular stacked parsing, exploiting a beam search strategy and using a Multilayer Perceptron classifier. For the Evalita 2009 Dependency Parsing task DesR was configured to use a combination of stacked parsers. The stacked combination achieved the best accuracy scores in both the main and pilot subtasks. The contribution to the result of various choices is analyzed, in particular for taking advantage of the peculiar features of the TUT Treebank.", "title": "" } ]
[ { "docid": "79fc27c21305f5ff2db35f3529db8909", "text": "Bundling techniques provide a visual simplification of a graph drawing or trail set, by spatially grouping similar graph edges or trails. This way, the structure of the visualization becomes simpler and thereby easier to comprehend in terms of assessing relations that are encoded by such paths, such as finding groups of strongly interrelated nodes in a graph, finding connections between spatial regions on a map linked by a number of vehicle trails, or discerning the motion structure of a set of objects by analyzing their paths. In this state of the art report, we aim to improve the understanding of graph and trail bundling via the following main contributions. First, we propose a data-based taxonomy that organizes bundling methods on the type of data they work on (graphs vs trails, which we refer to as paths). Based on a formal definition of path bundling, we propose a generic framework that describes the typical steps of all bundling algorithms in terms of high-level operations and show how existing method classes implement these steps. Next, we propose a description of tasks that bundling aims to address. Finally, we provide a wide set of example applications of bundling techniques and relate these to the above-mentioned taxonomies. Through these contributions, we aim to help both researchers and users to understand the bundling landscape as well as its technicalities.", "title": "" }, { "docid": "964deb65d393564f62b9df68fa1b00d9", "text": "Inferring abnormal glucose events such as hyperglycemia and hypoglycemia is crucial for the health of both diabetic patients and non-diabetic people. However, regular blood glucose monitoring can be invasive and inconvenient in everyday life. We present SugarMate, a first smartphone-based blood glucose inference system as a temporary alternative to continuous blood glucose monitors (CGM) when they are uncomfortable or inconvenient to wear. In addition to the records of food, drug and insulin intake, it leverages smartphone sensors to measure physical activities and sleep quality automatically. Provided with the imbalanced and often limited measurements, a challenge of SugarMate is the inference of blood glucose levels at a fine-grained time resolution. We propose Md3RNN, an efficient learning paradigm to make full use of the available blood glucose information. Specifically, the newly designed grouped input layers, together with the adoption of a deep RNN model, offer an opportunity to build blood glucose models for the general public based on limited personal measurements from single-user and grouped-users perspectives. Evaluations on 112 users demonstrate that Md3RNN yields an average accuracy of 82.14%, significantly outperforming previous learning methods those are either shallow, generically structured, or oblivious to grouped behaviors. Also, a user study with the 112 participants shows that SugarMate is acceptable for practical usage.", "title": "" }, { "docid": "9b160fe780000fa624f45a8edd699ba6", "text": "In this paper, to solve the problems of hexapod robot's foot frequent collisions with hard ground impact and other issues, we establish the foot-modeled and its parameters identification in the geological environment, and compile Fortran language which is based on the foot-modeled with the secondary development program by the ADAMS dynamic link and the overall hexagonal hexapod robot dynamics simulates in co-simulation of MATLAB and ADAMS. 
Through the analysis of the simulation results, the correctness and universality of the foot-modeled and the geological environment is verified.", "title": "" }, { "docid": "de016ffaace938c937722f8a47cc0275", "text": "Conventional traffic light detection methods often suffers from false positives in urban environment because of the complex backgrounds. To overcome such limitation, this paper proposes a method that combines a conventional approach, which is fast but weak to false positives, and a DNN, which is not suitable for detecting small objects but a very powerful classifier. Experiments on real data showed promising results.", "title": "" }, { "docid": "1f714aea64a7d23743e507724e4d531b", "text": "At the moment, Support Vector Machine (SVM) has been widely used in the study of stock investment related topics. Stock investment can be further divided into three strategies such as: buy, sell and hold. Using data concerning China Steel Corporation, this article adopts genetic algorithm for the search of the best SVM parameter and the selection of the best SVM prediction variable, then it will be compared with Logistic Regression for the classification prediction capability of stock investment. From the classification prediction result and the result of AUC of the models presented in this article, it can be seen that the SVM after adjustment of input variables and parameters will have classification prediction capability relatively superior to that of the other three models.", "title": "" }, { "docid": "3133829dd980cc1b428d80890cded347", "text": "Finger vein images are rich in orientation and edge features. Inspired by the edge histogram descriptor proposed in MPEG-7, this paper presents an efficient orientation-based local descriptor, named histogram of salient edge orientation map (HSEOM). HSEOM is based on the fact that human vision is sensitive to edge features for image perception. For a given image, HSEOM first finds oriented edge maps according to predefined orientations using a well-known edge operator and obtains a salient edge orientation map by choosing an orientation with the maximum edge magnitude for each pixel. Then, subhistograms of the salient edge orientation map are generated from the nonoverlapping submaps and concatenated to build the final HSEOM. In the experiment of this paper, eight oriented edge maps were used to generate a salient edge orientation map for HSEOM construction. Experimental results on our available finger vein image database, MMCBNU_6000, show that the performance of HSEOM outperforms that of state-of-the-art orientation-based methods (e.g., Gabor filter, histogram of oriented gradients, and local directional code). Furthermore, the proposed HSEOM has advantages of low feature dimensionality and fast implementation for a real-time finger vein recognition system.", "title": "" }, { "docid": "b3a2a9eceeef68789e7782bcb5a8e072", "text": "To present a new method for building boundary detection and extraction based on the active contour model, is the main objective of this research. Classical models of this type are associated with several shortcomings; they require extensive initialization, they are sensitive to noise, and adjustment issues often become problematic with complex images. In this research a new model of active contours has been proposed that is optimized for the automatic building extraction.
This new active contour model, in comparison to the classical ones, can detect and extract the building boundaries more accurately, and is capable of avoiding detection of the boundaries of features in the neighborhood of buildings such as streets and trees. Finally, the detected building boundaries are generalized to obtain a regular shape for building boundaries. Tests with our proposed model demonstrate excellent accuracy in terms of building boundary extraction. However, due to the radiometric similarity between building roofs and the image background, our system fails to recognize a few buildings. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f264d5b90dfb774e9ec2ad055c4ebe62", "text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.", "title": "" }, { "docid": "10187e22397b1c30b497943764d32c34", "text": "Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.", "title": "" }, { "docid": "a6a9376f6205d5c2bc48964d482b6443", "text": "Enrollment in online courses is rapidly increasing and attrition rates remain high. 
This paper presents a literature review addressing the role of interactivity in student satisfaction and persistence in online learning. Empirical literature was reviewed through the lens of Bandura's social cognitive theory, Anderson's interaction equivalency theorem, and Tinto's social integration theory. Findings suggest that interactivity is an important component of satisfaction and persistence for online learners, and that preferences for types of online interactivity vary according to type of learner. Student–instructor interaction was also noted to be a primary variable in online student satisfaction and persistence.", "title": "" }, { "docid": "48a75e28154d630da14fd3dba09d0af8", "text": "Over the years, artificial intelligence (AI) is spreading its roots in different areas by utilizing the concept of making the computers learn and handle complex tasks that previously require substantial laborious tasks by human beings. With better accuracy and speed, AI is helping lawyers to streamline work processing. New legal AI software tools like Catalyst, Ross intelligence, and Matlab along with natural language processing provide effective quarrel resolution, better legal clearness, and superior admittance to justice and fresh challenges to conventional law firms providing legal services using leveraged cohort correlate model. This paper discusses current applications of legal AI and suggests deep learning and machine learning techniques that can be applied in future to simplify the cumbersome legal tasks.", "title": "" }, { "docid": "be29160b73b9ab727eb760a108a7254a", "text": "Two-dimensional (2-D) analytical permanent-magnet (PM) eddy-current loss calculations are presented for slotless PM synchronous machines (PMSMs) with surface-inset PMs considering the current penetration effect. In this paper, the term slotless implies that either the stator is originally slotted but the slotting effects are neglected or the stator is originally slotless. The analytical magnetic field distribution is computed in polar coordinates from the 2-D subdomain method (i.e., based on formal resolution of Maxwell's equation applied in subdomain). Based on the predicted magnetic field distribution, the eddy-currents induced in the PMs are analytically obtained and the PM eddy-current losses considering eddy-current reaction field are calculated. The analytical expressions can be used for slotless PMSMs with any number of phases and any form of current and overlapping winding distribution. The effects of stator slotting are neglected and the current density distribution is modeled by equivalent current sheets located on the slot opening. To evaluate the efficacy of the proposed technique, the 2-D PM eddy-current losses for two slotless PMSMs are analytically calculated and compared with those obtained by 2-D finite-element analysis (FEA). The effects of the rotor rotational speed and the initial rotor mechanical angular position are investigated. The analytical results are in good agreement with those obtained by the 2-D FEA.", "title": "" }, { "docid": "3373fdc2e85ae0c2163229ed848c8bec", "text": "This paper shows how to carry out batch continuous-time trajectory estimation for bodies translating and rotating in three-dimensional (3D) space, using a very efficient form of Gaussian-process (GP) regression. The method is fast, singularity-free, uses a physically motivated prior (the mean is constant body-centric velocity), and permits trajectory queries at arbitrary times through GP interpolation. 
Landmark estimation can be folded in to allow for simultaneous trajectory estimation and mapping (STEAM), a variant of SLAM.", "title": "" }, { "docid": "a818a70bd263617eb3089cde9e9d1bb9", "text": "The paper proposes identifying relevant information sources from the history of combined searching and browsing behavior of many Web users. While it has been previously shown that user interactions with search engines can be employed to improve document ranking, browsing behavior that occurs beyond search result pages has been largely overlooked in prior work. The paper demonstrates that users' post-search browsing activity strongly reflects implicit endorsement of visited pages, which allows estimating topical relevance of Web resources by mining large-scale datasets of search trails. We present heuristic and probabilistic algorithms that rely on such datasets for suggesting authoritative websites for search queries. Experimental evaluation shows that exploiting complete post-search browsing trails outperforms alternatives in isolation (e.g., clickthrough logs), and yields accuracy improvements when employed as a feature in learning to rank for Web search.", "title": "" }, { "docid": "d74f489606f770d199ea540aea67d689", "text": "While mobile shopping (m-shopping) is convenient in the age of mobile commerce, many people still discontinue their shopping using mobile devices even though they have the experience. Therefore, it is important for the m-shopping providers to understand the determinants of m-shopping continuance. This study extended Expectation-Confirmation Model (ECM) to examine the determinants of m-shopping continuance by incorporating trust. There were 244 effective questionnaires obtained for survey analysis. The results showed that trust can overcome the limitations of ECM on predicating m-shopping continuance and improve the explanatory power of initial ECM.", "title": "" }, { "docid": "27bf126da661051da506926f7d9de632", "text": "In this paper, we propose an novel implementation of a simultaneous localization and mapping (SLAM) system based on a monocular camera from an unmanned aerial vehicle (UAV) using Depth prediction performed with Capsule Networks (CapsNet), which possess improvements over the drawbacks of the more widely-used Convolutional Neural Networks (CNN). An Extended Kalman Filter will assist in estimating the position of the UAV so that we are able to update the belief for the environment. Results will be evaluated on a benchmark dataset to portray the accuracy of our intended approach.", "title": "" }, { "docid": "c1f1c40bb7fc3c27f843a65bc713d06b", "text": "Indoor air quality is important. It influences human productivity and health. Personal pollution exposure can be measured using stationary or mobile sensor networks, but each of these approaches has drawbacks. Stationary sensor network accuracy suffers because it is difficult to place a sensor in every location people might visit. In mobile sensor networks, accuracy and drift resistance are generally sacrificed for the sake of mobility and economy. We propose a hybrid sensor network architecture, which contains both stationary sensors (for accurate readings and calibration) and mobile sensors (for coverage). Our technique uses indoor pollutant concentration prediction models to determine the structure of the hybrid sensor network. 
In this work, we have (1) developed a predictive model for pollutant concentration that minimizes prediction error; (2) developed algorithms for hybrid sensor network construction; and (3) deployed a sensor network to gather data on the airflow in a building, which are later used to evaluate the prediction model and hybrid sensor network synthesis algorithm. Our modeling technique reduces sensor network error by 40.4% on average relative to a technique that does not explicitly consider the inaccuracies of individual sensors. Our hybrid sensor network synthesis technique improves personal exposure measurement accuracy by 35.8% on average compared with a stationary sensor network architecture.", "title": "" }, { "docid": "2dbc68492e54d61446dac7880db71fdd", "text": "Supervised deep learning methods have shown promising results for the task of monocular depth estimation; but acquiring ground truth is costly, and prone to noise as well as inaccuracies. While synthetic datasets have been used to circumvent above problems, the resultant models do not generalize well to natural scenes due to the inherent domain shift. Recent adversarial approaches for domain adaption have performed well in mitigating the differences between the source and target domains. But these methods are mostly limited to a classification setup and do not scale well for fully-convolutional architectures. In this work, we propose AdaDepth - an unsupervised domain adaptation strategy for the pixel-wise regression task of monocular depth estimation. The proposed approach is devoid of above limitations through a) adversarial learning and b) explicit imposition of content consistency on the adapted target representation. Our unsupervised approach performs competitively with other established approaches on depth estimation tasks and achieves state-of-the-art results in a semi-supervised setting.", "title": "" }, { "docid": "c432a44e48e777a7a3316c1474f0aa12", "text": "In this paper, we present an algorithm that generates high dynamic range (HDR) images from multi-exposed low dynamic range (LDR) stereo images. The vast majority of cameras in the market only capture a limited dynamic range of a scene. Our algorithm first computes the disparity map between the stereo images. The disparity map is used to compute the camera response function which in turn results in the scene radiance maps. A refinement step for the disparity map is then applied to eliminate edge artifacts in the final HDR image. Existing methods generate HDR images of good quality for still or slow motion scenes, but give defects when the motion is fast. Our algorithm can deal with images taken during fast motion scenes and tolerate saturation and radiometric changes better than other stereo matching algorithms.", "title": "" }, { "docid": "e7cf2e5d05818eaded8a5565a9bf42e4", "text": "We design and implement the first private and anonymous decentralized crowdsourcing system ZebraLancer, and overcome two fundamental challenges of decentralizing crowdsourcing, i.e. data leakage and identity breach. First, our outsource-then-prove methodology resolves the tension between blockchain transparency and data confidentiality, which is critical in crowdsourcing use-case. 
ZebraLancer ensures: (i) a requester will not pay more than what data deserve, according to a policy announced when her task is published via the blockchain; (ii) each worker indeed gets a payment based on the policy, if he submits data to the blockchain; (iii) the above properties are realized not only without a central arbiter, but also without leaking the data to the open blockchain. Furthermore, the transparency of blockchain allows one to infer private information about workers and requesters through their participation history. On the other hand, allowing anonymity will enable a malicious worker to submit multiple times to reap rewards. ZebraLancer overcomes this problem by allowing anonymous requests/submissions without sacrificing the accountability. The idea behind is a subtle linkability: if a worker submits twice to a task, anyone can link the submissions, or else he stays anonymous and unlinkable across tasks. To realize this delicate linkability, we put forward a novel cryptographic concept, i.e. the common-prefix-linkable anonymous authentication. We remark the new anonymous authentication scheme might be of independent interest. Finally, we implement our protocol for a common image annotation task and deploy it in a test net of Ethereum. The experiment results show the applicability of our protocol with the existing real-world blockchain.", "title": "" } ]
scidocsrr
ba3c0a81602f7e15905dc97d202cb105
A Novel Neural Sequence Model with Multiple Attentions for Word Sense Disambiguation
[ { "docid": "e7eb15df383c92fcd5a4edc7e27b5265", "text": "This article presents a new model for word sense disambiguation formulated in terms of evolutionary game theory, where each word to be disambiguated is represented as a node on a graph whose edges represent word relations and senses are represented as classes. The words simultaneously update their class membership preferences according to the senses that neighboring words are likely to choose. We use distributional information to weigh the influence that each word has on the decisions of the others and semantic similarity information to measure the strength of compatibility among the choices. With this information we can formulate the word sense disambiguation problem as a constraint satisfaction problem and solve it using tools derived from game theory, maintaining the textual coherence. The model is based on two ideas: Similar words should be assigned to similar classes and the meaning of a word does not depend on all the words in a text but just on some of them. The article provides an in-depth motivation of the idea of modeling the word sense disambiguation problem in terms of game theory, which is illustrated by an example. The conclusion presents an extensive analysis on the combination of similarity measures to use in the framework and a comparison with state-of-the-art systems. The results show that our model outperforms state-of-the-art algorithms and can be applied to different tasks and in different scenarios.", "title": "" }, { "docid": "f37623a4f7a1b7b328883ab016e1b285", "text": "Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder–decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-ofspeech tags, and syntactic dependency labels as input features to English↔German and English→Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An opensource implementation of our neural MT system is available1, as are sample files and configurations2.", "title": "" }, { "docid": "78a1ebceb57a90a15357390127c443b7", "text": "In this paper we present a clean, yet effective, model for word sense disambiguation. Our approach leverage a bidirectional long short-term memory network which is shared between all words. This enables the model to share statistical strength and to scale well with vocabulary size. The model is trained end-to-end, directly from the raw text to sense labels, and makes effective use of word order. We evaluate our approach on two standard datasets, using identical hyperparameter settings, which are in turn tuned on a third set of held out data. We employ no external resources (e.g. knowledge graphs, part-of-speech tagging, etc), language specific features, or hand crafted rules, but still achieve statistically equivalent results to the best state-of-the-art systems, that employ no such limitations.", "title": "" } ]
[ { "docid": "cbda3aafb8d8f76a8be24191e2fa7c54", "text": "With the rapid development of robot and other intelligent and autonomous agents, how a human could be influenced by a robot’s expressed mood when making decisions becomes a crucial question in human-robot interaction. In this pilot study, we investigate (1) in what way a robot can express a certain mood to influence a human’s decision making behavioral model; (2) how and to what extent the human will be influenced in a game theoretic setting. More specifically, we create an NLP model to generate sentences that adhere to a specific affective expression profile. We use these sentences for a humanoid robot as it plays a Stackelberg security game against a human. We investigate the behavioral model of the human player.", "title": "" }, { "docid": "219a90eb2fd03cd6cc5d89fda740d409", "text": "The general problem of computing poste rior probabilities in Bayesian networks is NP hard Cooper However e cient algorithms are often possible for particular applications by exploiting problem struc tures It is well understood that the key to the materialization of such a possibil ity is to make use of conditional indepen dence and work with factorizations of joint probabilities rather than joint probabilities themselves Di erent exact approaches can be characterized in terms of their choices of factorizations We propose a new approach which adopts a straightforward way for fac torizing joint probabilities In comparison with the clique tree propagation approach our approach is very simple It allows the pruning of irrelevant variables it accommo dates changes to the knowledge base more easily it is easier to implement More importantly it can be adapted to utilize both intercausal independence and condi tional independence in one uniform frame work On the other hand clique tree prop agation is better in terms of facilitating pre computations", "title": "" }, { "docid": "f6ac111d3ece47f9881a4f1b0ce6d4be", "text": "An Enterprise Framework (EF) is a software architecture. Such frameworks expose a rich set of semantics and modeling paradigms for developing and extending enterprise applications. EFs are, by design, the cornerstone of an organization’s systems development activities. EFs offer a streamlined and flexible alternative to traditional tools and applications which feature numerous point solutions integrated into complex and often inflexible environments. Enterprise Frameworks play an important role since they allow reuse of design knowledge and offer techniques for creating reference models and scalable architectures for enterprise integration. These models and architectures are sufficiently flexible and powerful to be used at multiple levels, e.g. from the integration of the planning systems of geographically distributed factories, to generate a global virtual factory, down to the monitoring and control system for a single production cell. These frameworks implement or enforce well-documented standards for component integration and collaboration. The architecture of an Enterprise framework provides for ready integration with new or existing components. It defines how these components must interact with the framework and how objects will collaborate. In addition, it defines how developers' work together to develop and extend enterprise applications based on the framework. 
Therefore, the goal of an Enterprise framework is to reduce complexity and lifecycle costs of enterprise systems, while ensuring flexibility.", "title": "" }, { "docid": "7ca1c9096c6176cb841ae7f0e7262cb7", "text": "“Industry 4.0” is recognized as the future of industrial production in which concepts as Smart Factory and Decentralized Decision Making are fundamental. This paper proposes a novel strategy to support decentralized decision, whilst identifying opportunities and challenges of Industry 4.0 contextualizing the potential that represents industrial digitalization and how technological advances can contribute for a new perspective on manufacturing production. It is analysed a set of barriers to the full implementation of Industry 4.0 vision, identifying areas in which decision support is vital. Then, for each of the identified areas, the authors propose a strategy, characterizing it together with the level of complexity that is involved in the different processes. The strategies proposed are derived from the needs of two of Industry 4.0 main characteristics: horizontal integration and vertical integration. For each case, decision approaches are proposed concerning the type of decision required (strategic, tactical, operational and real-time). Validation results are provided together with a discussion on the main challenges that might be an obstacle for a successful decision strategy.", "title": "" }, { "docid": "55bb962b4b3ce14f8d50983835bf3f73", "text": "This is a quantitative study on the performance of 3G mobile data offloading through WiFi networks. We recruited about 100 iPhone users from a metropolitan area and collected statistics on their WiFi connectivity during about a two and half week period in February 2010. We find that a user is in WiFi coverage for 70% of the time on average and the distributions of WiFi connection and disconnection times have a strong heavy-tail tendency with means around 2 hours and 40 minutes, respectively. Using the acquired traces, we run trace-driven simulation to measure offloading efficiency under diverse conditions e.g. traffic types, deadlines and WiFi deployment scenarios. The results indicate that if users can tolerate a two hour delay in data transfer (e.g, video and image up-loads), the network can offload 70% of the total 3G data traffic on average. We also develop a theoretical framework that permits an analytical study of the average performance of offloading. This tool is useful for network providers to obtain a rough estimate on the average performance of offloading for a given inputWiFi deployment condition.", "title": "" }, { "docid": "5289fc231c716e2ce9e051fb0652ce94", "text": "Noninvasive body contouring has become one of the fastest-growing areas of esthetic medicine. Many patients appear to prefer nonsurgical less-invasive procedures owing to the benefits of fewer side effects and shorter recovery times. Increasingly, 635-nm low-level laser therapy (LLLT) has been used in the treatment of a variety of medical conditions and has been shown to improve wound healing, reduce edema, and relieve acute pain. Within the past decade, LLLT has also emerged as a new modality for noninvasive body contouring. Research has shown that LLLT is effective in reducing overall body circumference measurements of specifically treated regions, including the hips, waist, thighs, and upper arms, with recent studies demonstrating the long-term effectiveness of results. 
The treatment is painless, and there appears to be no adverse events associated with LLLT. The mechanism of action of LLLT in body contouring is believed to stem from photoactivation of cytochrome c oxidase within hypertrophic adipocytes, which, in turn, affects intracellular secondary cascades, resulting in the formation of transitory pores within the adipocytes' membrane. The secondary cascades involved may include, but are not limited to, activation of cytosolic lipase and nitric oxide. Newly formed pores release intracellular lipids, which are further metabolized. Future studies need to fully outline the cellular and systemic effects of LLLT as well as determine optimal treatment protocols.", "title": "" }, { "docid": "3ce2dd736e7de17a6a34c5c29b582162", "text": "Query term weighting is a fundamental task in information retrieval and most popular term weighting schemes are primarily based on statistical analysis of term occurrences within the document collection. In this work we study how term weighting may benefit from syntactic analysis of the corpus. Focusing on community question answering (CQA) sites, we take into account the syntactic function of the terms within CQA texts as an important factor affecting their relative importance for retrieval. We analyze a large log of web queries that landed on Yahoo Answers site, showing a strong deviation between the tendencies of different document words to appear in a landing (click-through) query given their syntactic function. To this end, we propose a novel term weighting method that makes use of the syntactic information available for each query term occurrence in the document, on top of term occurrence statistics. The relative importance of each feature is learned via a learning to rank algorithm that utilizes a click-through query log. We examine the new weighting scheme using manual evaluation based on editorial data and using automatic evaluation over the query log. Our experimental results show consistent improvement in retrieval when syntactic information is taken into account.", "title": "" }, { "docid": "4b1ba99581d537fcbbe291d74b8f23f3", "text": "To put the concept of lean software development in context, it's useful to point out similarities and differences with agile software development. Agile development methods have generally expected system architecture and interaction design to occur outside the development team, or to occur in very small increments within the team. Because of this, agile practices often prove to be insufficient in addressing issues of solution design, user interaction design, and high-level system architecture. Increasingly, agile development practices are being thought of as good ways to organize software development, but insufficient ways to address design. Because design is fundamentally iterative and development is fundamentally iterative, the two disciplines suffer if they are not carefully integrated with each other. Because lean development lays out a set of principles that demand a whole-product, complete life-cycle, cross-functional approach, it's the more likely candidate to guide the combination of design, development, deployment, and validation into a single feedback loop focused on the discovery and delivery of value.", "title": "" }, { "docid": "858557b9e2efa6ea18a7094294bedb4f", "text": "Recent advances in technology have made our work easier compare to earlier times. 
Computer networks are growing day by day, but the security of computers and networks has always been a major concern for organizations ranging from small to large enterprises. Organizations are aware of the possible threats and attacks and prepare accordingly, yet attackers are still able to exploit loopholes. Intrusion detection is one of the major fields of research, and researchers are trying to find new algorithms for detecting intrusions. Clustering techniques from data mining are an active area of research for detecting possible intrusions and attacks. This paper presents a new clustering approach for anomaly intrusion detection based on the K-medoids clustering method with certain modifications. The proposed algorithm achieves a high detection rate and overcomes the disadvantages of the K-means algorithm.", "title": "" }, { "docid": "13c250fc46dfc45e9153dbb1dc184b70", "text": "This paper proposes Travel Prediction-based Data forwarding (TPD), tailored and optimized for multihop vehicle-to-vehicle communications. The previous schemes forward data packets mostly utilizing statistical information about road network traffic, which becomes much less accurate when vehicles travel in a light-traffic vehicular network. In this light-traffic vehicular network, highly dynamic vehicle mobility can introduce a large variance for the traffic statistics used in the data forwarding process. However, with the popularity of GPS navigation systems, vehicle trajectories become available and can be utilized to significantly reduce this uncertainty in the road traffic statistics. Our TPD takes advantage of these vehicle trajectories for a better data forwarding in light-traffic vehicular networks. Our idea is that with the trajectory information of vehicles in a target road network, a vehicle encounter graph is constructed to predict vehicle encounter events (i.e., timing for two vehicles to exchange data packets in communication range). With this encounter graph, TPD optimizes data forwarding process for minimal data delivery delay under a specific delivery ratio threshold. Through extensive simulations, we demonstrate that our TPD significantly outperforms existing legacy schemes in a variety of road network settings. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "08948dbae407fda2b1e9fb8c5231f796", "text": "Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. 
These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception.", "title": "" }, { "docid": "2c96d2609034163715df3b58c480b153", "text": "This paper presents a taxonomy of vulnerabilities created as a part of an effort to develop a framework for deriving verification and validation strategies to assess software security. This taxonomy is grounded in a process/object model of computation that establishes a relationship between software vulnerabilities, an executing process, and computer system resources such as memory, input/output, or cryptographic resources. That relationship promotes the concept that a software application is vulnerable to exploits when it permits the violation of (a) constraints imposed by computer system resources and/or (b) assumptions made about the usage of those resources. The taxonomy identifies and classifies these constraints and assumptions. The process/object model also serves as a basis for the classification scheme the taxonomy uses. That is, the computer system resources (or objects) identified in the process/object model form the categories and refined subcategories of the taxonomy. Vulnerabilities, which are expressed in the form of constraints and assumptions, are classified within the taxonomy according to these categories and subcategories. This taxonomy of vulnerabilities is novel and distinctively different from other taxonomies found in literature", "title": "" }, { "docid": "fb00acfa20d32b727241e27bd4e3fcab", "text": "Interest in 3D video applications and systems is growing rapidly and technology is maturating. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but this requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused to the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was generically implemented, that is, does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, highquality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereoand multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.", "title": "" }, { "docid": "59bb0ad2e81f0e471338cdcd8b9f9727", "text": "We recently established conditions allowing for long-term expansion of epithelial organoids from intestine, recapitulating essential features of the in vivo tissue architecture. Here we apply this technology to study primary intestinal organoids of people suffering from cystic fibrosis, a disease caused by mutations in CFTR, encoding cystic fibrosis transmembrane conductance regulator. 
Forskolin induces rapid swelling of organoids derived from healthy controls or wild-type mice, but this effect is strongly reduced in organoids of subjects with cystic fibrosis or in mice carrying the Cftr F508del mutation and is absent in Cftr-deficient organoids. This pattern is phenocopied by CFTR-specific inhibitors. Forskolin-induced swelling of in vitro–expanded human control and cystic fibrosis organoids corresponds quantitatively with forskolin-induced anion currents in freshly excised ex vivo rectal biopsies. Function of the CFTR F508del mutant protein is restored by incubation at low temperature, as well as by CFTR-restoring compounds. This relatively simple and robust assay will facilitate diagnosis, functional studies, drug development and personalized medicine approaches in cystic fibrosis.", "title": "" }, { "docid": "0c3387ec7ed161d931bc08151e722d10", "text": "New updated! The latest book from a very famous author finally comes out. Book of the tower of hanoi myths and maths, as an amazing reference becomes what you need to get. What's for is this book? Are you still thinking for what the book is? Well, this is what you probably will get. You should have made proper choices for your better life. Book, as a source that may involve the facts, opinion, literature, religion, and many others are the great friends to join with.", "title": "" }, { "docid": "7819d359e169ae18f9bb50f464e1233c", "text": "As large amount of data is generated in medical organizations (hospitals, medical centers) but this data is not properly used. There is a wealth of hidden information present in the datasets. The healthcare environment is still “information rich” but “knowledge poor”. There is a lack of effective analysis tools to discover hidden relationships and trends in data. Advanced data mining techniques can help remedy this situation. For this purpose we can use different data mining techniques. This research paper intends to provide a survey of current techniques of knowledge discovery in databases using data mining techniques that are in use in today’s medical research particularly in Heart Disease Prediction. This research has developed a prototype Heart Disease Prediction System (HDPS) using data mining techniques namely, Decision Trees, Naïve Bayes and Neural Network. This Heart disease prediction system can answer complex “what if” queries which traditional decision support systems cannot. Using medical profiles such as age, sex, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. It enables significant knowledge, e.g. patterns, relationships between medical factors related to heart disease, to be established.", "title": "" }, { "docid": "bd0e9da77d26116c629c7c8c259013f9", "text": "In the appstore-centric ecosystem, app developers have an urgent requirement to optimize their release strategy to maximize user adoption of their apps. To address this problem, we introduce an approach to assisting developers to select the proper release opportunity based on the purpose of the update and current condition of the app. Before that, we propose the update interval to characterize release patterns of apps, and find significance of the updates through empirical analysis. We mined the release-history data of 17,820 apps from 33 categories in Google Play, over a period of 105 days. With 41,028 releases identified from these apps, we reveal important characteristics of update intervals and how these factors can influence update effects. 
We suggest developers to synthetically consider app ranking, rating trend, and update purpose in addition to the timing of releasing an app version. We propose a Multinomial Naive Bayes model to help decide an optimal release opportunity to gain better user adoption.", "title": "" }, { "docid": "885b7e9fb662d938fc8264597fa070b8", "text": "Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks. The most efficient and popular approaches learn or retrofit such representations using additional external data. Resulting embeddings are generally better than their corpus-only counterparts, although such resources cover a fraction of words in the vocabulary. In this paper, we propose a new approach, Dict2vec, based on one of the largest yet refined datasource for describing words – natural language dictionaries. Dict2vec builds new word pairs from dictionary entries so that semantically-related words are moved closer, and negative sampling filters out pairs whose words are unrelated in dictionaries. We evaluate the word representations obtained using Dict2vec on eleven datasets for the word similarity task and on four datasets for a text classification task.", "title": "" }, { "docid": "d65aa05f6eb97907fe436ff50628a916", "text": "The process of stool transfer from healthy donors to the sick, known as faecal microbiota transplantation (FMT), has an ancient history. However, only recently researchers started investigating its applications in an evidence-based manner. Current knowledge of the microbiome, the concept of dysbiosis and results of preliminary research suggest that there is an association between gastrointestinal bacterial disruption and certain disorders. Researchers have studied the effects of FMT on various gastrointestinal and non-gastrointestinal diseases, but have been unable to precisely pinpoint specific bacterial strains responsible for the observed clinical improvement or futility of the process. The strongest available data support the efficacy of FMT in the treatment of recurrent Clostridium difficile infection with cure rates reported as high as 90% in clinical trials. The use of FMT in other conditions including inflammatory bowel disease, functional gastrointestinal disorders, obesity and metabolic syndrome is still controversial. Results from clinical studies are conflicting, which reflects the gap in our knowledge of the microbiome composition and function, and highlights the need for a more defined and personalised microbial isolation and transfer.", "title": "" }, { "docid": "5f6f0bd98fa03e4434fabe18642a48bc", "text": "Previous research suggests that women's genital arousal is an automatic response to sexual stimuli, whereas men's genital arousal is dependent upon stimulus features specific to their sexual interests. In this study, we tested the hypothesis that a nonhuman sexual stimulus would elicit a genital response in women but not in men. Eighteen heterosexual women and 18 heterosexual men viewed seven sexual film stimuli, six human films and one nonhuman primate film, while measurements of genital and subjective sexual arousal were recorded. Women showed small increases in genital arousal to the nonhuman stimulus and large increases in genital arousal to both human male and female stimuli. Men did not show any genital arousal to the nonhuman stimulus and demonstrated a category-specific pattern of arousal to the human stimuli that corresponded to their stated sexual orientation. 
These results suggest that stimulus features necessary to evoke genital arousal are much less specific in women than in men.", "title": "" } ]
scidocsrr
22a1afdf397fe5b27e8c8c9bcf9fe08c
Robust Subspace Clustering for Multi-View Data by Exploiting Correlation Consensus
[ { "docid": "50c3e7855f8a654571a62a094a86c4eb", "text": "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.", "title": "" } ]
[ { "docid": "32e5e753a611feaeb0c527851d9cc00f", "text": "This paper deals with impedance control of a rotary Series Elastic Actuator designed to assist in flexion/extension of the knee joint during physical therapy. The device includes a DC motor, a worm gear and a customized torsion spring. Since the elastic element is the most important component in the actuator design, a finite element analysis is used to meet the specific requirements of knee assistance. The proposed impedance control scheme comprises three cascade controllers: an inner-loop PI velocity controller, a PI torque controller, and an outer-loop PD position controller. The impedance controller’s performance is evaluated through the Frequency Response Function analysis, and experimental results demonstrate the ability of the system to reproduce impedance behavior within an amplitude and frequency range compatible for gait assistance.", "title": "" }, { "docid": "347ec97976cae28ca1a6ea0a3263ee3a", "text": "In this letter, a different method to design an ultra-wideband (UWB) monopole antenna with dual frequency bandstop performance is presented. The proposed antenna consists of a rectangular-ring radiating patch with a meander-line strip as a resonator protruded inside the ring, and a ground plane. The measured results reveal that the presented dual band-notched monopole antenna design exhibits an operating bandwidth (VSWR<2) form 2.8 GHz to 13.8 GHz with two notched bands, covering the 5.2/5.8 GHz, 3.5/5.5 GHz, and 4 GHz to suppress any interference from the wireless local area network (WLAN), worldwide interoperability microwave access (WiMAX) and C-band systems, respectively. Good return loss, antenna gain and radiation pattern characteristics are obtained in the frequency band of interest. The proposed monopole antenna configuration is simple, easy to fabricate and can be integrated into any UWB system. The antenna has a small dimension of 10×17 mm 2 .", "title": "" }, { "docid": "5ae974ffec58910ea3087aefabf343f8", "text": "With the ever-increasing use of multimedia contents through electronic commerce and on-line services, the problems associated with the protection of intellectual property, management of large database and indexation of content are becoming more prominent. Watermarking has been considered as efficient means to these problems. Although watermarking is a powerful tool, there are some issues with the use of it, such as the modification of the content and its security. With respect to this, identifying content itself based on its own features rather than watermarking can be an alternative solution to these problems. The aim of fingerprinting is to provide fast and reliable methods for content identification. In this paper, we present a new approach for image fingerprinting using the Radon transform to make the fingerprint robust against affine transformations. Since it is quite easy with modern computers to apply affine transformations to audio, image and video, there is an obvious necessity for affine transformation resilient fingerprinting. Experimental results show that the proposed fingerprints are highly robust against most signal processing transformations. Besides robustness, we also address other issues such as pairwise independence, database search efficiency and key dependence of the proposed method. r 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f9f1cf949093c41a84f3af854a2c4a8b", "text": "Modern TCP implementations are capable of very high point-to-point bandwidths. 
Delivered performance on the fastest networks is often limited by the sending and receiving hosts, rather than by the network hardware or the TCP protocol implementation itself. In this case, systems can achieve higher bandwidth by reducing host overheads through a variety of optimizations above and below the TCP protocol stack, given support from the network interface. This paper surveys the most important of these optimizations, and illustrates their effects quantitatively with empirical results from a an experimental network delivering up to two gigabits per second of point-to-point TCP bandwidth.", "title": "" }, { "docid": "3aaf7b95e0b3e7642c3065f3bc55692c", "text": "iv 1 Problem Definition 1 2 Origins from Two Extremes 3 2.1 The Origins of Agile Methods 3 2.2 The Origins of CMMI 5 3 Factors that Affect Perception 7 3.1 Misuse 7 3.2 Lack of Accurate Information 8 3.3 Terminology Difficulties 9 4 The Truth About CMMI 11 4.1 CMMI Is a Model, Not a Process Standard 11 4.2 Process Areas, Not Processes 13 4.3 SCAMPI Appraisals 14 5 The Truth About Agile 16 6 There Is Value in Both Paradigms 20 6.1 Challenges When Using Agile 20 6.2 Challenges When Using CMMI 22 6.3 Current Published Research 23 6.4 The Hope in Recent Trends 24 7 Problems Not Solved by CMMI nor Agile 27", "title": "" }, { "docid": "36f31dea196f2d7a74bc442f1c184024", "text": "The causes of Parkinson's disease (PD), the second most common neurodegenerative disorder, are still largely unknown. Current thinking is that major gene mutations cause only a small proportion of all cases and that in most cases, non-genetic factors play a part, probably in interaction with susceptibility genes. Numerous epidemiological studies have been done to identify such non-genetic risk factors, but most were small and methodologically limited. Larger, well-designed prospective cohort studies have only recently reached a stage at which they have enough incident patients and person-years of follow-up to investigate possible risk factors and their interactions. In this article, we review what is known about the prevalence, incidence, risk factors, and prognosis of PD from epidemiological studies.", "title": "" }, { "docid": "36b46a2bf4b46850f560c9586e91d27b", "text": "Promoting pro-environmental behaviour amongst urban dwellers is one of today's greatest sustainability challenges. The aim of this study is to test whether an information intervention, designed based on theories from environmental psychology and behavioural economics, can be effective in promoting recycling of food waste in an urban area. To this end we developed and evaluated an information leaflet, mainly guided by insights from nudging and community-based social marketing. The effect of the intervention was estimated through a natural field experiment in Hökarängen, a suburb of Stockholm city, Sweden, and was evaluated using a difference-in-difference analysis. The results indicate a statistically significant increase in food waste recycled compared to a control group in the research area. The data analysed was on the weight of food waste collected from sorting stations in the research area, and the collection period stretched for almost 2 years, allowing us to study the short- and long term effects of the intervention. Although the immediate positive effect of the leaflet seems to have attenuated over time, results show that there was a significant difference between the control and the treatment group, even 8 months after the leaflet was distributed. 
Insights from this study can be used to guide development of similar pro-environmental behaviour interventions for other urban areas in Sweden and abroad, improving chances of reaching environmental policy goals.", "title": "" }, { "docid": "6f26f4409d418fe69b1d43ec9b4f8b39", "text": "Automatic understanding of human affect using visual signals is of great importance in everyday human–machine interactions. Appraising human emotional states, behaviors and reactions displayed in real-world settings, can be accomplished using latent continuous dimensions (e.g., the circumplex model of affect). Valence (i.e., how positive or negative is an emotion) and arousal (i.e., power of the activation of the emotion) constitute popular and effective representations for affect. Nevertheless, the majority of collected datasets this far, although containing naturalistic emotional states, have been captured in highly controlled recording conditions. In this paper, we introduce the Aff-Wild benchmark for training and evaluating affect recognition algorithms. We also report on the results of the First Affect-in-the-wild Challenge (Aff-Wild Challenge) that was recently organized in conjunction with CVPR 2017 on the Aff-Wild database, and was the first ever challenge on the estimation of valence and arousal in-the-wild. Furthermore, we design and extensively train an end-to-end deep neural architecture which performs prediction of continuous emotion dimensions based on visual cues. The proposed deep learning architecture, AffWildNet, includes convolutional and recurrent neural network layers, exploiting the invariant properties of convolutional features, while also modeling temporal dynamics that arise in human behavior via the recurrent layers. The AffWildNet produced state-of-the-art results on the Aff-Wild Challenge. We then exploit the AffWild database for learning features, which can be used as priors for achieving best performances both for dimensional, as well as categorical emotion recognition, using the RECOLA, AFEW-VA and EmotiW 2017 datasets, compared to all other methods designed for the same goal. The database and emotion recognition models are available at http://ibug.doc.ic.ac.uk/resources/first-affect-wild-challenge .", "title": "" }, { "docid": "86a4a75135878f0cc7dc83d0742f5791", "text": "The past few years have seen an explosion of interest in the epigenetics of cancer. This has been a consequence of both the exciting coalescence of the chromatin and DNA methylation fields, and the realization that DNA methylation changes are involved in human malignancies. The ubiquity of DNA methylation changes has opened the way to a host of innovative diagnostic and therapeutic strategies. Recent advances attest to the great promise of DNA methylation markers as powerful future tools in the clinic.", "title": "" }, { "docid": "d1df6cd3949b924ec51a5f0fd3193806", "text": "This paper presents a new approach to C-space exploration and path planning for robotic manipulators using the structure named bur of free C-space. This structure builds upon the so-called bubble, which is a local volume of free C-space, easily computed using the distance information in the workspace. We show how the same distance information can be used to compute the bur that can reach substantially beyond the boundary of the bubble. It is shown how burs can be used to form a rapidly exploring bur tree (RBT): a space-filling tree that resembles RRT. Such a structure can easily be used within a suitably tailored path planning algorithm. 
Simulation study shows how the RBT-based algorithm outperforms the classical RRT-based method.", "title": "" }, { "docid": "823177883b5cef404ffe117c7b51f2ba", "text": "A new class of broadband arrays with frequency invariant beam pattern is proposed. By suitable substitutions, the beam pattern of a continuous sensor array can be regarded as the Fourier transform of its spatial and temporal parameters. For the design of practical discrete sensor arrays, it can be regarded as an approximation to the continuous sensor case and a simple design method is provided, which can be applied to linear, rectangular and cubic broadband arrays.", "title": "" }, { "docid": "85581f0e6db599dd914f6e84586d15c6", "text": "Automatic feature extraction in latent fingerprints is a challenging problem due to poor quality of most latents, such as unclear ridge structures, overlapped lines and letters, and overlapped fingerprints. We proposed a latent fingerprint enhancement algorithm which requires manually marked region of interest (ROI) and singular points. The core of the proposed enhancement algorithm is a novel orientation field estimation algorithm, which fits orientation field model to coarse orientation field estimated from skeleton outputted by a commercial fingerprint SDK. Experimental results on NIST SD27 latent fingerprint database indicate that by incorporating the proposed enhancement algorithm, the matching accuracy of the commercial matcher was significantly improved.", "title": "" }, { "docid": "986b23f5c2a9df55c2a8c915479a282a", "text": "Recurrent neural network language models (RNNLM) have recently demonstrated vast potential in modelling long-term dependencies for NLP problems, ranging from speech recognition to machine translation. In this work, we propose methods for conditioning RNNLMs on external side information, e.g., metadata such as keywords or document title. Our experiments show consistent improvements of RNNLMs using side information over the baselines for two different datasets and genres in two languages. Interestingly, we found that side information in a foreign language can be highly beneficial in modelling texts in another language, serving as a form of cross-lingual language modelling.", "title": "" }, { "docid": "d6d0a5d1ddffaefe6d2f0944e50b3b70", "text": "We present a generalization of the scalar importance function employed by Metropolis Light Transport (MLT) and related Markov chain rendering algorithms. Although MLT is known for its user-designable mutation rules, we demonstrate that its scalar contribution function is similarly programmable in an unbiased manner. Normally, MLT samples light paths with a tendency proportional to their brightness. For a range of scenes, we demonstrate that this importance function is undesirable and leads to poor sampling behaviour. Instead, we argue that simple user-designable importance functions can concentrate work in transport effects of interest and increase estimator efficiency. Unlike mutation rules, these functions are not encumbered with the calculation of transitional probabilities. We introduce alternative importance functions, which encourage the Markov chain to aggressively pursue sampling goals of interest to the user. In addition, we prove that these importance functions may adapt over the course of a render in an unbiased fashion. To that end, we introduce multi-stage MLT, a general rendering setting for creating such adaptive functions. 
This allows us to create a noise-sensitive MLT renderer whose importance function explicitly targets noise. Finally, we demonstrate that our techniques are compatible with existing Markov chain rendering algorithms and significantly improve their visual efficiency.", "title": "" }, { "docid": "d58f60013b507b286fcfc9f19304fea6", "text": "The outcome of patients suffering from spondyloarthritis is determined by chronic inflammation and new bone formation leading to ankylosis. The latter process manifests by new cartilage and bone formation leading to joint or spine fusion. This article discusses the main mechanisms of new bone formation in spondyloarthritis. It reviews the key molecules and concepts of new bone formation and ankylosis in animal models of disease and translates these findings to human disease. In addition, proposed biomarkers of new bone formation are evaluated and the translational current and future challenges are discussed with regards to new bone formation in spondyloarthritis.", "title": "" }, { "docid": "5c111a5a30f011e4f47fb9e2041644f9", "text": "Since the audio recapture can be used to assist audio splicing, it is important to identify whether a suspected audio recording is recaptured or not. However, few works on such detection have been reported. In this paper, we propose an method to detect the recaptured audio based on deep learning and we investigate two deep learning techniques, i.e., neural network with dropout method and stack auto-encoders (SAE). The waveform samples of audio frame is directly used as the input for the deep neural network. The experimental results show that error rate around 7.5% can be achieved, which indicates that our proposed method can successfully discriminate recaptured audio and original audio.", "title": "" }, { "docid": "36c73f8dd9940b2071ad55ae1dd83c27", "text": "Current music recommender systems rely on techniques like collaborative filtering on user-provided information in order to generate relevant recommendations based upon users’ music collections or listening habits. In this paper, we examine whether better recommendations can be obtained by taking into account the music preferences of the user’s social contacts. We assume that music is naturally diffused through the social network of its listeners, and that we can propagate automatic recommendations in the same way through the network. In order to test this statement, we developed a music recommender application called Starnet on a Social Networking Service. It generated recommendations based either on positive ratings of friends (social recommendations), positive ratings of others in the network (nonsocial recommendations), or not based on ratings (random recommendations). The user responses to each type of recommendation indicate that social recommendations are better than non-social recommendations, which are in turn better than random recommendations. Likewise, the discovery of novel and relevant music is more likely via social recommendations than non-social. Social shuffle recommendations enable people to discover music through a serendipitous process powered by human relationships and tastes, exploiting the user’s social network to share cultural experiences.", "title": "" }, { "docid": "7427616531c787431edc4580f9dcc95c", "text": "Effective collaboration between clients and development teams is vitally important to all Agile ISD methods, enabling the key benefit of Agile approaches which is the ability to react quickly to changing requirements. 
The process by which this collaboration develops is particularly consequential for short-term Agile projects where we cannot assume that there will be sufficient time for initial \"kinks\" to be worked out. We develop a process model, based on extended concepts of technology frames and technology frame disruption, to explore the extent to which client framing of software projects is resistant to change in a highly collaborative environment. We report on survey and interview data from fourteen Agile ISD project clients, and discuss continuing research. We find that novice clients' frames focus on business-facing technology characteristics, but do not focus on project considerations, most notably the need to understand the capabilities of the team, and the technical constraints of the technology.", "title": "" }, { "docid": "51b0b757c823a4a0fb73b3e408c55d0d", "text": "This paper considers the use of Machine Learning (ML) in medicine by focusing on the main problem that this computational approach has been aimed at solving or at least minimizing: uncertainty. To this aim, we point out how uncertainty is so ingrained in medicine that it biases also the representation of clinical phenomena, that is the very input of ML models, thus undermining the clinical significance of their output. Recognizing this can motivate both medical doctors, in taking more responsibility in the development and use of these decision aids, and the researchers, in pursuing different ways to assess the value of these systems. In so doing, both designers and users could take this intrinsic characteristic of medicine more seriously and consider alternative approaches that do not “sweep uncertainty under the rug” within an objectivist fiction, which everyone can come up by believing as true.", "title": "" }, { "docid": "646572f76cffd3ba225105d6647a588f", "text": "Context: Cyber-physical systems (CPSs) have emerged to be the next generation of engineered systems driving the so-called fourth industrial revolution. CPSs are becoming more complex, open and more prone to security threats, which urges security to be engineered systematically into CPSs. Model-Based Security Engineering (MBSE) could be a key means to tackle this challenge via security by design, abstraction, and", "title": "" } ]
scidocsrr
9d25261a8ffabfdd48094aaf871caf0e
Signed Network Embedding in Social Media
[ { "docid": "3c1f6ef650ce559f7e2d388347bf8e84", "text": "Relations between users on social media sites often reflect a mixture of positive (friendly) and negative (antagonistic) interactions. In contrast to the bulk of research on social networks that has focused almost exclusively on positive interpretations of links between people, we study how the interplay between positive and negative relationships affects the structure of on-line social networks. We connect our analyses to theories of signed networks from social psychology. We find that the classical theory of structural balance tends to capture certain common patterns of interaction, but that it is also at odds with some of the fundamental phenomena we observe --- particularly related to the evolving, directed nature of these on-line networks. We then develop an alternate theory of status that better explains the observed edge signs and provides insights into the underlying social mechanisms. Our work provides one of the first large-scale evaluations of theories of signed networks using on-line datasets, as well as providing a perspective for reasoning about social media sites.", "title": "" } ]
[ { "docid": "4a779f5e15cc60f131a77c69e09e54bc", "text": "We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular focus on problems arising in image processing. We are motivated by the problem of restoring noisy and blurry images via variational methods by using total variation regularization. We obtain rigorous convergence results and effective stopping criteria for the general procedure. The numerical results for denoising appear to give significant improvement over standard models, and preliminary results for deblurring/denoising are very encouraging.", "title": "" }, { "docid": "ccc3c2ee7a08eb239443d5773707d782", "text": "We introduce an iterative normalization and clustering method for single-cell gene expression data. The emerging technology of single-cell RNA-seq gives access to gene expression measurements for thousands of cells, allowing discovery and characterization of cell types. However, the data is confounded by technical variation emanating from experimental errors and cell type-specific biases. Current approaches perform a global normalization prior to analyzing biological signals, which does not resolve missing data or variation dependent on latent cell types. Our model is formulated as a hierarchical Bayesian mixture model with cell-specific scalings that aid the iterative normalization and clustering of cells, teasing apart technical variation from biological signals. We demonstrate that this approach is superior to global normalization followed by clustering. We show identifiability and weak convergence guarantees of our method and present a scalable Gibbs inference algorithm. This method improves cluster inference in both synthetic and real single-cell data compared with previous methods, and allows easy interpretation and recovery of the underlying structure and cell types.", "title": "" }, { "docid": "fdaf5546d430226721aa1840f92ba5af", "text": "The recent development of regulatory policies that permit the use of TV bands spectrum on a secondary basis has motivated discussion about coexistence of primary (e.g. TV broadcasts) and secondary users (e.g. WiFi users in TV spectrum). However, much less attention has been given to coexistence of different secondary wireless technologies in the TV white spaces. Lack of coordination between secondary networks may create severe interference situations, resulting in less efficient usage of the spectrum. In this paper, we consider two of the most prominent wireless technologies available today, namely Long Term Evolution (LTE), and WiFi, and address some problems that arise from their coexistence in the same band. We perform exhaustive system simulations and observe that WiFi is hampered much more significantly than LTE in coexistence scenarios. A simple coexistence scheme that reuses the concept of almost blank subframes in LTE is proposed, and it is observed that it can improve the WiFi throughput per user up to 50 times in the studied scenarios.", "title": "" }, { "docid": "80759a5c2e60b444ed96c9efd515cbdf", "text": "The Web of Things is an active research field which aims at promoting the easy access and handling of smart things' digital representations through the adoption of Web standards and technologies. While huge research and development efforts have been spent on lower level networks and software technologies, it has been recognized that little experience exists instead in modeling and building applications for the Web of Things. 
Although several works have proposed Representational State Transfer (REST) inspired approaches for the Web of Things, a main limitation is that poor support is provided to web developers for speeding up the development of Web of Things applications while taking full advantage of REST benefits. In this paper, we propose a framework which supports developers in modeling smart things as web resources, exposing them through RESTful Application Programming Interfaces (APIs) and developing applications on top of them. The framework consists of a Web Resource information model, a middleware, and tools for developing and publishing smart things' digital representations on the Web. We discuss the framework compliance with REST guidelines and its major implementation choices. Finally, we report on our test activities carried out within the SmartSantander European Project to evaluate the use and proficiency of our framework in a smart city scenario.", "title": "" }, { "docid": "2caea7f13980ea4a48fb8e8bb71842f1", "text": "Internet of Things, commonly known as IoT is a promising area in technology that is growing day by day. It is a concept whereby devices connect with each other or to living things. Internet of Things has shown its great benefits in today’s life. Agriculture is one amongst the sectors which contributes a lot to the economy of Mauritius and to get quality products, proper irrigation has to be performed. Hence proper water management is a must because Mauritius is a tropical island that has gone through water crisis since the past few years. With the concept of Internet of Things and the power of the cloud, it is possible to use low cost devices to monitor and be informed about the status of an agricultural area in real time. Thus, this paper provides the design and implementation of a Smart Irrigation and Monitoring System which makes use of Microsoft Azure machine learning to process data received from sensors in the farm and weather forecasting data to better inform the farmers on the appropriate moment to start irrigation. The Smart Irrigation and Monitoring System is made up of sensors which collect data such as air humidity, air temperature, and most importantly soil moisture data. These data are used to monitor the air quality and water content of the soil. The raw data are transmitted to the", "title": "" }, { "docid": "4a6c2d388bb114751b2ce9c6df55beab", "text": "To support people trying to lose weight and stay healthy, more and more fitness apps have sprung up including the ability to track both calories intake and expenditure. Users of such apps are part of a wider \"quantified self\" movement and many opt-in to publicly share their logged data. In this paper, we use public food diaries of more than 4,000 long-term active MyFitnessPal users to study the characteristics of a (un-)successful diet. Concretely, we train a machine learning model to predict repeatedly being over or under self-set daily calories goals and then look at which features contribute to the model's prediction. Our findings include both expected results, such as the token \"mcdonalds\" or the category \"dessert\" being indicative for being over the calories goal, but also less obvious ones such as the difference between pork and poultry concerning dieting success, or the use of the \"quick added calories\" functionality being indicative of over-shooting calorie-wise. 
This study also hints at the feasibility of using such data for more in-depth data mining, e.g., looking at the interaction between consumed foods such as mixing protein- and carbohydrate-rich foods. To the best of our knowledge, this is the first systematic study of public food diaries.", "title": "" }, { "docid": "32a4c17a53643042a5c19180bffd7c21", "text": "Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a \"$1 recognizer\" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.", "title": "" }, { "docid": "748996944ebd52a7d82c5ca19b90656b", "text": "The experiment was conducted with three biofloc treatments and one control in triplicate in 500 L capacity indoor tanks. Biofloc tanks, filled with 350 L of water, were fed with sugarcane molasses (BFTS), tapioca flour (BFTT), wheat flour (BFTW) and clean water as control without biofloc and allowed to stand for 30 days. The postlarvae of Litopenaeus vannamei (Boone, 1931) with an Average body weight of 0.15 0.02 g were stocked at the rate of 130 PL m 2 and cultured for a period of 60 days fed with pelleted feed at the rate of 1.5% of biomass. The total suspended solids (TSS) level was maintained at around 500 mg L 1 in BFT tanks. The addition of carbohydrate significantly reduced the total ammoniaN (TAN), nitrite-N and nitrate-N in water and it significantly increased the total heterotrophic bacteria (THB) population in the biofloc treatments. There was a significant difference in the final average body weight (8.49 0.09 g) in the wheat flour treatment (BFTW) than those treatment and control group of the shrimp. Survival of the shrimps was not affected by the treatments and ranged between 82.02% and 90.3%. The proximate and chemical composition of biofloc and proximate composition of the shrimp was significantly different between the biofloc treatments and control. Tintinids, ciliates, copepods, cyanobacteria and nematodes were identified in all the biofloc treatments, nematodes being the most dominant group of organisms in the biofloc. 
It could be concluded that the use of wheat flour (BFTW) effectively enhanced the biofloc production and contributed towards better water quality which resulted in higher production of shrimp.", "title": "" }, { "docid": "f6d81abce568dd297f0bf0f0b6fff837", "text": "Recently, there emerged revived interests of designing automatic programs (e.g., using genetic/evolutionary algorithms) to optimize the structure of Convolutional Neural Networks (CNNs) [1] for a specific task. The challenge in designing such programs lies in how to balance between large search space of the network structures and high computational costs. Existing works either impose strong restrictions on the search space or use enormous computing resources. In this paper, we study how to design a genetic programming approach for optimizing the structure of a CNN for a given task under limited computational resources yet without imposing strong restrictions on the search space. To reduce the computational costs, we propose two general strategies that are observed to be helpful: (i) aggressively selecting strongest individuals for survival and reproduction, and killing weaker individuals at a very early age; (ii) increasing mutation frequency to encourage diversity and faster evolution. The combined strategy with additional optimization techniques allows us to explore a large search space but with affordable computational costs. Our results on standard benchmark datasets (MNIST [1], SVHN [2], CIFAR-10 [3], CIFAR-100 [3]) are competitive to similar approaches with significantly reduced computational costs.", "title": "" }, { "docid": "14827ea435d82e4bfe481713af45afed", "text": "This paper introduces a model-based approach to estimating longitudinal wheel slip and detecting immobilized conditions of autonomous mobile robots operating on outdoor terrain. A novel tire traction/braking model is presented and used to calculate vehicle dynamic forces in an extended Kalman filter framework. Estimates of external forces and robot velocity are derived using measurements from wheel encoders, inertial measurement unit, and GPS. Weak constraints are used to constrain the evolution of the resistive force estimate based upon physical reasoning. Experimental results show the technique accurately and rapidly detects robot immobilization conditions while providing estimates of the robot's velocity during normal driving. Immobilization detection is shown to be robust to uncertainty in tire model parameters. Accurate immobilization detection is demonstrated in the absence of GPS, indicating the algorithm is applicable for both terrestrial applications and space robotics.", "title": "" }, { "docid": "c095de72c7cffc19f3b4302c2045525c", "text": "Reinforcement learning schemes perform direct on-line search in control space. This makes them appropriate for modifying control rules to obtain improvements in the performance of a system. The effectiveness of a reinforcement learning strategy is studied here through the training of a learning classifier system (LCS) that controls the movement of an autonomous vehicle in simulated paths including left and right turns. The LCS comprises a set of condition-action rules (classifiers) that compete to control the system and evolve by means of a genetic algorithm (GA). Evolution and operation of classifiers depend upon an appropriate credit assignment mechanism based on reinforcement learning. Different design options and the role of various parameters have been investigated experimentally. 
The performance of vehicle movement under the proposed evolutionary approach is superior compared with that of other (neural) approaches based on reinforcement learning that have been applied previously to the same benchmark problem.", "title": "" }, { "docid": "50e3052f48fccda7e404f13f60f14048", "text": "BACKGROUND\nMany procedures have been described for surgical treatment of symptomatic hallux rigidus. Dorsal cheilectomy of the metatarsophalangeal joint combined with a dorsal-based closing wedge osteotomy of the proximal phalanx (i.e., Moberg procedure) has been described as an effective procedure. For patients with hallux rigidus and clinically significant hallux valgus interphalangeus, the authors previously described a dorsal cheilectomy combined with a biplanar closing wedge osteotomy of the proximal phalanx, combining a Moberg osteotomy with an Akin osteotomy. The purpose of this study was to describe the clinical results of this procedure.\n\n\nMETHODS\nThis article is a retrospective review of prospectively gathered data that reports the clinical and radiographic results of dorsal cheilectomy combined with a biplanar oblique closing wedge proximal phalanx osteotomy (i.e., Moberg-Akin procedure) for patients with symptomatic hallux rigidus and hallux valgus interphalangeus. Consecutive patients were followed and evaluated for clinical and radiographic healing, satisfaction, and ultimate need for additional procedure(s). Thirty-five feet in 34 patients underwent the procedure.\n\n\nRESULTS\nAll osteotomies healed. At an average of 22.5 months of follow-up, 90% of patients reported good or excellent results, with pain relief, improved function, and fewer shoe wear limitations following this procedure. Hallux valgus and hallux interphalangeal angles were radiographically improved. Other than one patient who requested hardware removal, no patients required additional surgical procedures.\n\n\nCONCLUSIONS\nDorsal cheilectomy combined with a Moberg-Akin procedure was an effective and durable procedure with minimal morbidity in patients with hallux rigidus combined with hallux valgus interphalangeus.", "title": "" }, { "docid": "463768d109b05a48d95697b82f16574e", "text": "Penile squamous cell carcinoma (SCC) with considerable urethral extension is uncommon and difficult to manage. It often is resistant to less invasive and nonsurgical treatments and frequently results in partial or total penectomy, which can lead to cosmetic disfigurement, functional issues, and psychological distress. We report a case of penile SCC in situ with considerable urethral extension with a focus of cells suspicious for moderately well-differentiated and invasive SCC that was treated with Mohs micrographic surgery (MMS). A review of the literature on penile tumors treated with MMS also is provided.", "title": "" }, { "docid": "1decfffb283be978ff7c22e69f28cecc", "text": "Music Information Retrieval (MIR) is an interdisciplinary research area that has grown out of the need to manage burgeoning collections of music in digital form. Its diverse disciplinary communities, exemplified by the recently established ISMIR conference series, have yet to articulate a common research agenda or agree on methodological principles and metrics of success. In order for MIR to succeed, researchers need to work with real user communities and develop research resources such as reference music collections , so that the wide variety of techniques being developed in MIR can be meaningfully compared with one another. 
Out of these efforts, a common MIR practice can emerge.", "title": "" }, { "docid": "5a379e488c0ff86eae7815442d251c03", "text": "Countless studies showed that [60]fullerene (C(60)) and derivatives could have many potential biomedical applications. However, while several independent research groups showed that C(60) has no acute or sub-acute toxicity in various experimental models, more than 25 years after its discovery the in vivo fate and the chronic effects of this fullerene remain unknown. If the potential of C(60) and derivatives in the biomedical field have to be fulfilled these issues must be addressed. Here we show that oral administration of C(60) dissolved in olive oil (0.8 mg/ml) at reiterated doses (1.7 mg/kg of body weight) to rats not only does not entail chronic toxicity but it almost doubles their lifespan. The effects of C(60)-olive oil solutions in an experimental model of CCl(4) intoxication in rat strongly suggest that the effect on lifespan is mainly due to the attenuation of age-associated increases in oxidative stress. Pharmacokinetic studies show that dissolved C(60) is absorbed by the gastro-intestinal tract and eliminated in a few tens of hours. These results of importance in the fields of medicine and toxicology should open the way for the many possible -and waited for- biomedical applications of C(60) including cancer therapy, neurodegenerative disorders, and ageing.", "title": "" }, { "docid": "ce29e17a4fb9c67676fb534e58e2e20d", "text": "OBJECTIVE\nTo examine the association between frequency of assisting with home meal preparation and fruit and vegetable preference and self-efficacy for making healthier food choices among grade 5 children in Alberta, Canada.\n\n\nDESIGN\nA cross-sectional survey design was used. Children were asked how often they helped prepare food at home and rated their preference for twelve fruits and vegetables on a 3-point Likert-type scale. Self-efficacy was measured with six items on a 4-point Likert-type scale asking children their level of confidence in selecting and eating healthy foods at home and at school.\n\n\nSETTING\nSchools (n =151) located in Alberta, Canada.\n\n\nSUBJECTS\nGrade 5 students (n = 3398).\n\n\nRESULTS\nA large majority (83-93 %) of the study children reported helping in home meal preparation at least once monthly. Higher frequency of helping prepare and cook food at home was associated with higher fruit and vegetable preference and with higher self-efficacy for selecting and eating healthy foods.\n\n\nCONCLUSIONS\nEncouraging children to be more involved in home meal preparation could be an effective health promotion strategy. These findings suggest that the incorporation of activities teaching children how to prepare simple and healthy meals in health promotion programmes could potentially lead to improvement in dietary habits.", "title": "" }, { "docid": "9f34152d5dd13619d889b9f6e3dfd5c3", "text": "Nichols, M. (2003). A theory for eLearning. Educational Technology & Society, 6(2), 1-10, Available at http://ifets.ieee.org/periodical/6-2/1.html ISSN 1436-4522. © International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. 
Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the editors at kinshuk@massey.ac.nz. A theory for eLearning", "title": "" }, { "docid": "8d890dba24fc248ee37653aad471713f", "text": "We consider the problem of constructing a spanning tree for a graph G = (V,E) with n vertices whose maximal degree is the smallest among all spanning trees of G. This problem is easily shown to be NP-hard. We describe an iterative polynomial time approximation algorithm for this problem. This algorithm computes a spanning tree whose maximal degree is at most O(Δ + log n), where Δ is the degree of some optimal tree. The result is generalized to the case where only some vertices need to be connected (Steiner case) and to the case of directed graphs. It is then shown that our algorithm can be refined to produce a spanning tree of degree at most Δ + 1. Unless P = NP, this is the best bound achievable in polynomial time.", "title": "" }, { "docid": "94af221c857462b51e14f527010fccde", "text": "The immunology of the hygiene hypothesis of allergy is complex and involves the loss of cellular and humoral immunoregulatory pathways as a result of the adoption of a Western lifestyle and the disappearance of chronic infectious diseases. The influence of diet and reduced microbiome diversity now forms the foundation of scientific thinking on how the allergy epidemic occurred, although clear mechanistic insights into the process in humans are still lacking. Here we propose that barrier epithelial cells are heavily influenced by environmental factors and by microbiome-derived danger signals and metabolites, and thus act as important rheostats for immunoregulation, particularly during early postnatal development. Preventive strategies based on this new knowledge could exploit the diversity of the microbial world and the way humans react to it, and possibly restore old symbiotic relationships that have been lost in recent times, without causing disease or requiring a return to an unhygienic life style.", "title": "" }, { "docid": "6ea59490942d4748ce85c728573bdb9a", "text": "We present an accurate, efficient, and robust pose estimation system based on infrared LEDs. They are mounted on a target object and are observed by a camera that is equipped with an infrared-pass filter. The correspondences between LEDs and image detections are first determined using a combinatorial approach and then tracked using a constant-velocity model. The pose of the target object is estimated with a P3P algorithm and optimized by minimizing the reprojection error. Since the system works in the infrared spectrum, it is robust to cluttered environments and illumination changes. In a variety of experiments, we show that our system outperforms state-of-the-art approaches. Furthermore, we successfully apply our system to stabilize a quadrotor both indoors and outdoors under challenging conditions. We release our implementation as open-source software.", "title": "" } ]
scidocsrr
5a36bb2ebbc7ecd4dafc319388be63ff
MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving
[ { "docid": "d4fa5b9d4530b12a394c1e98ea2793b1", "text": "Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the chi2-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition.", "title": "" }, { "docid": "884121d37d1b16d7d74878fb6aff9cdb", "text": "All models are wrong, but some are useful. 2 Acknowledgements The authors of this guide would like to thank David Warde-Farley, Guillaume Alain and Caglar Gulcehre for their valuable feedback. Special thanks to Ethan Schoonover, creator of the Solarized color scheme, 1 whose colors were used for the figures. Feedback Your feedback is welcomed! We did our best to be as precise, informative and up to the point as possible, but should there be anything you feel might be an error or could be rephrased to be more precise or com-prehensible, please don't refrain from contacting us. Likewise, drop us a line if you think there is something that might fit this technical report and you would like us to discuss – we will make our best effort to update this document. Source code and animations The code used to generate this guide along with its figures is available on GitHub. 2 There the reader can also find an animated version of the figures.", "title": "" }, { "docid": "9d49b81400e1153be65417d638e2d7a3", "text": "We propose an approach to detect drivable road area in monocular images. It is a self-supervised approach which doesn't require any human road annotations on images to train the road detection algorithm. Our approach reduces human labeling effort and makes training scalable. We combine the best of both supervised and unsupervised methods in our approach. First, we automatically generate training road annotations for images using OpenStreetMap1, vehicle pose estimation sensors, and camera parameters. Next, we train a Convolutional Neural Network (CNN) for road detection using these annotations. We show that we are able to generate reasonably accurate training annotations in KITTI data-set [1]. We achieve state-of-the-art performance among the methods which do not require human annotation effort.", "title": "" } ]
[ { "docid": "a08f280e8b6eb58d34016e473d8f207a", "text": "Since sequential information plays an important role in modeling user behaviors, various sequential recommendation methods have been proposed. Methods based on Markov assumption are widely-used, but independently combine several most recent components. Recently, Recurrent Neural Networks (RNN) based methods have been successfully applied in several sequential modeling tasks. However, for real-world applications, these methods have difficulty in modeling the contextual information, which has been proved to be very important for behavior modeling. In this paper, we propose a novel model, named Context-Aware Recurrent Neural Networks (CA-RNN). Instead of using the constant input matrix and transition matrix in conventional RNN models, CA-RNN employs adaptive context-specific input matrices and adaptive context-specific transition matrices. The adaptive context-specific input matrices capture external situations where user behaviors happen, such as time, location, weather and so on. And the adaptive context-specific transition matrices capture how lengths of time intervals between adjacent behaviors in historical sequences affect the transition of global sequential features. Experimental results show that the proposed CA-RNN model yields significant improvements over state-of-the-art sequential recommendation methods and context-aware recommendation methods on two public datasets, i.e., the Taobao dataset and the Movielens-1M dataset.", "title": "" }, { "docid": "22d78ead5b703225b34f3c29a5ff07ad", "text": "Children's experiences in early childhood have significant lasting effects in their overall development and in the United States today the majority of young children spend considerable amounts of time in early childhood education settings. At the national level, there is an expressed concern about the low levels of student interest and success in science, technology, engineering, and mathematics (STEM). Bringing these two conversations together our research focuses on how young children of preschool age exhibit behaviors that we consider relevant in engineering. There is much to be explored in STEM education at such an early age, and in order to proceed we created an experimental observation protocol in which we identified various pre-engineering behaviors based on pilot observations, related literature and expert knowledge. This protocol is intended for use by preschool teachers and other professionals interested in studying engineering in the preschool classroom.", "title": "" }, { "docid": "f4df443de6ab0f50375f5b9e9461a27d", "text": "Deep neural perception and control networks have become key components of selfdriving vehicles. User acceptance is likely to benefit from easy-to-interpret visual and textual driving rationales which allow end-users to understand what triggered a particular behavior. Our approach involves two stages. In the first stage, we use visual (spatial) attention model to train a convolutional network end-to-end from images to steering angle commands. The attention model identifies image regions that potentially influence the network’s output. We then apply a causal filtering step to determine which input regions causally influence the vehicle’s control signal. In the second stage, we use a video-to-text language model to produce textual rationales that justify the model’s decision. 
The explanation generator uses a spatiotemporal attention mechanism, which is encouraged to match the controller’s attention.", "title": "" }, { "docid": "b4b6417ea0e1bc70c5faa50f8e2edf59", "text": "As secure processing as well as correct recovery of data getting more important, digital forensics gain more value each day. This paper investigates the digital forensics tools available on the market and analyzes each tool based on the database perspective. We present a survey of digital forensics tools that are either focused on data extraction from databases or assist in the process of database recovery. In our work, a detailed list of current database extraction software is provided. We demonstrate examples of database extractions executed on representative selections from among tools provided in the detailed list. We use a standard sample database with each tool for comparison purposes. Based on the execution results obtained, we compare these tools regarding different criteria such as runtime, static or live acquisition, and more.", "title": "" }, { "docid": "4c0c4b68cdfa1cf684eabfa20ee0b88b", "text": "Orthogonal Frequency Division Multiplexing (OFDM) is an attractive technique for wireless communication over frequency-selective fading channels. OFDM suffers from high Peak-to-Average Power Ratio (PAPR), which limits OFDM usage and reduces the efficiency of High Power Amplifier (HPA) or badly degrades BER. Many PAPR reduction techniques have been proposed in the literature. PAPR reduction techniques can be classified into blind receiver and non-blind receiver techniques. Active Constellation Extension (ACE) is one of the best blind receiver techniques. While, Partial Transmit Sequence (PTS) can work as blind / non-blind technique. PTS has a great PAPR reduction gain on the expense of increasing computational complexity. In this paper we combine PTS with ACE in four possible ways to be suitable for blind receiver applications with better performance than conventional methods (i.e. PTS and ACE). Results show that ACE-PTS scheme is the best among others. Expectedly, any hybrid technique has computational complexity larger than that of its components. However, ACE-PTS can be used to achieve the same performance as that of PTS or worthy better, with less number of subblocks (i.e. with less computational complexity) especially in low order modulation techniques (e.g. 4-QAM and 16-QAM). Results show that ACE-PTS with V=8 can perform similar to or better than PTS with V=10 in 16-QAM or 4-QAM, respectively, with 74% and 40.5% reduction in required numbers of additions and multiplications, respectively.", "title": "" }, { "docid": "70e82da805e5bb21d35d552afe68bc61", "text": "The consumption of pomegranate juice (PJ), a rich source of antioxidant polyphenols, has grown tremendously due to its reported health benefits. Pomegranate extracts, which incorporate the major antioxidants found in pomegranates, namely, ellagitannins, have been developed as botanical dietary supplements to provide an alternative convenient form for consuming the bioactive polyphenols found in PJ. Despite the commercial availability of pomegranate extract dietary supplements, there have been no studies evaluating their safety in human subjects. A pomegranate ellagitannin-enriched polyphenol extract (POMx) was prepared for dietary supplement use and evaluated in two pilot clinical studies. Study 1 was designed for safety assessment in 64 overweight individuals with increased waist size. 
The subjects consumed either one or two POMx capsules per day providing 710 mg (435 mg of gallic acid equivalents, GAEs) or 1420 mg (870 mg of GAEs) of extracts, respectively, and placebo (0 mg of GAEs). Safety laboratory determinations, including complete blood count (CBC), chemistry, and urinalysis, were made at each of three visits. Study 2 was designed for antioxidant activity assessment in 22 overweight subjects by administration of two POMx capsules per day providing 1000 mg (610 mg of GAEs) of extract versus baseline measurements. Measurement of antioxidant activity as evidenced by thiobarbituric acid reactive substances (TBARS) in plasma were measured before and after POMx supplementation. There was evidence of antioxidant activity through a significant reduction in TBARS linked with cardiovascular disease risk. There were no serious adverse events in any subject studied at either site. These studies demonstrate the safety of a pomegranate ellagitannin-enriched polyphenol dietary supplement in humans and provide evidence of antioxidant activity in humans.", "title": "" }, { "docid": "7eba71bb191a31bd87cd9d2678a7b860", "text": "In winter, rainbow smelt (Osmerus mordax) accumulate glycerol and produce an antifreeze protein (AFP), which both contribute to freeze resistance. The role of differential gene expression in the seasonal pattern of these adaptations was investigated. First, cDNAs encoding smelt and Atlantic salmon (Salmo salar) phosphoenolpyruvate carboxykinase (PEPCK) and smelt glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were cloned so that all sequences required for expression analysis would be available. Using quantitative PCR, expression of beta actin in rainbow smelt liver was compared with that of GAPDH in order to determine its validity as a reference gene. Then, levels of glycerol-3-phosphate dehydrogenase (GPDH), PEPCK, and AFP relative to beta actin were measured in smelt liver over a fall-winter-spring interval. Levels of GPDH mRNA increased in the fall just before plasma glycerol accumulation, implying a driving role in glycerol synthesis. GPDH mRNA levels then declined during winter, well in advance of serum glycerol, suggesting the possibility of GPDH enzyme or glycerol conservation in smelt during the winter months. PEPCK mRNA levels rose in parallel with serum glycerol in the fall, consistent with an increasing requirement for amino acids as metabolic precursors, remained elevated for much of the winter, and then declined in advance of the decline in plasma glycerol. AFP mRNA was elevated at the onset of fall sampling in October and remained elevated until April, implying separate regulation from GPDH and PEPCK. Thus, winter freezing point depression in smelt appears to result from a seasonal cycle of GPDH gene expression, with an ensuing increase in the expression of PEPCK, and a similar but independent cycle of AFP gene expression.", "title": "" }, { "docid": "86f080bd29f8c8af593ea4120db63991", "text": "As the local and national clamor for foreign energy independent United States continues to grow unabated; renewable energy has been receiving increased focus and it’s widely believed that it’s not only the answer to ever increasing demand for energy in this country, but also the environmentally friendly means of meeting such demand. 
During the spring of 2010, I was involved with a 5KW solar power system design project; the project involved designing and building solar panels and associated accessories like the solar array mounts and Solar Inverter system. One of the key issues we ran into during the initial stage of the project was how to select efficient solar cells for panel building at a reasonable cost. While we were able to purchase good solar cells within our allocated budget, the issue of design for efficiency was not fully understood , not just in the contest of solar cells performance , but also in the overall system efficiency of the whole solar power system, hence the door was opened for this thesis. My thesis explored and expanded beyond the scope of the aforementioned project to research different avenues for improving the efficiency of solar photovoltaic power system from the solar cell level to the solar array mounting, array tracking and DC-AC inversion system techniques.", "title": "" }, { "docid": "3d5ab2c686c11527296537b4c8396ae2", "text": "This study investigated writing beliefs, self-regulatory behaviors, and epistemology beliefs of preservice teachers in academic writing tasks. Students completed self-report measures of selfregulation, epistemology, and beliefs about writing. Both knowledge and regulation of cognition were positively related to writing enjoyment, and knowledge of cognition was negatively related to beliefs of ability as a fixed entity. Enjoyment of writing was related to learnability and selfassessment. It may be that students who are more self-regulated during writing also believe they can learn to improve their writing skills. It may be, however, that students who believe writing is learnable will exert the effort to self-regulate during writing. Student beliefs and feelings about learning and writing play an important and complex role in their self-regulation behaviors. Suggestions for instruction are included, and continued research of students’ beliefs and selfregulation in naturalistic contexts is recommended.", "title": "" }, { "docid": "0ec337f7af66ede2a97ade80ce27c131", "text": "The processing time required by a cryptographic primitive implemented in hardware is an important metric for its performance but it has not received much attention in recent publications on lightweight cryptography. Nevertheless, there are important applications for cost effective low-latency encryption. As the first step in the field, this paper explores the lowlatency behavior of hardware implementations of a set of block ciphers. The latency of the implementations is investigated as well as the trade-offs with other metrics such as circuit area, time-area product, power, and energy consumption. The obtained results are related back to the properties of the underlying cipher algorithm and, as it turns out, the number of rounds, their complexity, and the similarity of encryption and decryption procedures have a strong impact on the results. We provide a qualitative description and conclude with a set of recommendations for aspiring low-latency block cipher designers.", "title": "" }, { "docid": "ef640dfcbed4b93413b03cd5c2ec3859", "text": "MaxStream is a federated stream processing system that seamlessly integrates multiple autonomous and heterogeneous Stream Processing Engines (SPEs) and databases. 
In this paper, we propose to demonstrate the key features of MaxStream using two application scenarios, namely the Sales Map & Spikes business monitoring scenario and the Linear Road Benchmark, each with a different set of requirements. More specifically, we will show how the MaxStream Federator can translate and forward the application queries to two different commercial SPEs (Coral8 and StreamBase), as well as how it does so under various persistency requirements.", "title": "" }, { "docid": "5c9e4a243ee97eb2b178cdaec44d3add", "text": "A 320-detector-row multislice computed tomography (320-MSCT) scanner can acquire a volume data set covering a maximum range of 16 cm and can generate axial images 0.5-mm thick at 0.5-mm intervals. Three-dimensional (3D) images reconstructed from the thin axial slices include multiplanar reconstruction and 3D-CT. Single-phase 3D images are reconstructed from 0.175-s data, and multiphase 3D images are created in 29 phases at intervals of 0.1 s. Continuous replay of these 3D images produces four-dimensional moving images. In order to determine the feasibility of the morphologic and kinematic analyses of swallowing using 320-MSCT, single-phase volume scanning was performed on three patients and multiphase volume scanning was performed on one healthy volunteer. The single-phase 3D images clearly and accurately showed the structures involved in swallowing, and the multiphase 3D images were able to show the oral stage to the early esophageal stage of swallowing, allowing a kinematic analysis of swallowing. We developed a reclining chair that allows scanning to be performed with the subject in a semisitting position, which makes swallowing evaluation by 320-MSCT applicable not only to research on healthy swallowing but also to the clinical examination of dysphagia patients.", "title": "" }, { "docid": "bc28f28d21605990854ac9649d244413", "text": "Mobile devices can provide people with contextual information. This information may benefit a primary activity, assuming it is easily accessible. In this paper, we present DisplaySkin, a pose-aware device with a flexible display circling the wrist. DisplaySkin creates a kinematic model of a user's arm and uses it to place information in view, independent of body pose. In doing so, DisplaySkin aims to minimize the cost of accessing information without being intrusive. We evaluated our pose-aware display with a rotational pointing task, which was interrupted by a notification on DisplaySkin. Results show that a pose-aware display reduces the time required to respond to notifications on the wrist.", "title": "" }, { "docid": "e67b75e11ca6dd9b4e6c77b3cb92cceb", "text": "The incidence of malignant melanoma continues to increase worldwide. This cancer can strike at any age; it is one of the leading causes of loss of life in young persons. Since this cancer is visible on the skin, it is potentially detectable at a very early stage when it is curable. New developments have converged to make fully automatic early melanoma detection a real possibility. First, the advent of dermoscopy has enabled a dramatic boost in clinical diagnostic ability to the point that melanoma can be detected in the clinic at the very earliest stages. The global adoption of this technology has allowed accumulation of large collections of dermoscopy images of melanomas and benign lesions validated by histopathology. 
The development of advanced technologies in the areas of image processing and machine learning have given us the ability to allow distinction of malignant melanoma from the many benign mimics that require no biopsy. These new technologies should allow not only earlier detection of melanoma, but also reduction of the large number of needless and costly biopsy procedures. Although some of the new systems reported for these technologies have shown promise in preliminary trials, widespread implementation must await further technical progress in accuracy and reproducibility. In this paper, we provide an overview of computerized detection of melanoma in dermoscopy images. First, we discuss the various aspects of lesion segmentation. Then, we provide a brief overview of clinical feature segmentation. Finally, we discuss the classification stage where machine learning algorithms are applied to the attributes generated from the segmented features to predict the existence of melanoma.", "title": "" }, { "docid": "61d506905286fc3297622d1ac39534f0", "text": "In this paper we present the setup of an extensive Wizard-of-Oz environment used for the data collection and the development of a dialogue system. The envisioned Perception and Interaction Assistant will act as an independent dialogue partner. Passively observing the dialogue between the two human users with respect to a limited domain, the system should take the initiative and get meaningfully involved in the communication process when required by the conversational situation. The data collection described here involves audio and video data. We aim at building a rich multi-media data corpus to be used as a basis for our research which includes, inter alia, speech and gaze direction recognition, dialogue modelling and proactivity of the system. We further aspire to obtain data with emotional content to perfom research on emotion recognition, psychopysiological and usability analysis.", "title": "" }, { "docid": "5eccbb19af4a1b19551ce4c93c177c07", "text": "This paper presents the design and development of a microcontroller based heart rate monitor using fingertip sensor. The device uses the optical technology to detect the flow of blood through the finger and offers the advantage of portability over tape-based recording systems. The important feature of this research is the use of Discrete Fourier Transforms to analyse the ECG signal in order to measure the heart rate. Evaluation of the device on real signals shows accuracy in heart rate estimation, even under intense physical activity. The performance of HRM device was compared with ECG signal represented on an oscilloscope and manual pulse measurement of heartbeat, giving excellent results. Our proposed Heart Rate Measuring (HRM) device is economical and user friendly.", "title": "" }, { "docid": "681d0a6dcad967340cfb3ebe9cf7b779", "text": "We demonstrate an integrated buck dc-dc converter for multi-V/sub CC/ microprocessors. At nominal conditions, the converter produces a 0.9-V output from a 1.2-V input. The circuit was implemented in a 90-nm CMOS technology. By operating at high switching frequency of 100 to 317 MHz with four-phase topology and fast hysteretic control, we reduced inductor and capacitor sizes by three orders of magnitude compared to previously published dc-dc converters. This eliminated the need for the inductor magnetic core and enabled integration of the output decoupling capacitor on-chip. 
The converter achieves 80%-87% efficiency and 10% peak-to-peak output noise for a 0.3-A output current and 2.5-nF decoupling capacitance. A forward body bias of 500 mV applied to PMOS transistors in the bridge improves efficiency by 0.5%-1%.", "title": "" }, { "docid": "81cb6b35dcf083fea3973f4ee75a9006", "text": "We propose frameworks and algorithms for identifying communities in social networks that change over time. Communities are intuitively characterized as \"unusually densely knit\" subsets of a social network. This notion becomes more problematic if the social interactions change over time. Aggregating social networks over time can radically misrepresent the existing and changing community structure. Instead, we propose an optimization-based approach for modeling dynamic community structure. We prove that finding the most explanatory community structure is NP-hard and APX-hard, and propose algorithms based on dynamic programming, exhaustive search, maximum matching, and greedy heuristics. We demonstrate empirically that the heuristics trace developments of community structure accurately for several synthetic and real-world examples.", "title": "" }, { "docid": "6fbce446ceb871bc1d832ce8d06398af", "text": "The 250 kW TRIGA Mark II research reactor, Vienna, operates since 7 March 1962. The initial criticality was achieved with the first core loading of 57 fuel elements (FE) of same type (Aluminium clad fuel with 20% enrichment). Later on due to fuel consumption SST clad 20% enriched FE (s) have been added to compensate the reactor core burn-up. In 1975 high enriched (HEU) TRIGA fuel (FLIP fuel = Fuel Lifetime Improvement Program) was introduced into the core. The addition of this FLIP fuel resulted in the current completely mixed core. Therefore the current core of the TRIGA reactor Vienna is operating with a completely mixed core using three different types of fuels with two categories of enrichments. This makes the reactor physics calculations very complicated. To calculate the current core, a Monte Carlo based radiation transport computer code MCNP5 was employed to develop the current core of the TRIGA reactor. The present work presents the MCNP model of the current core and its validation through two experiments performed on the reactor. The experimental results of criticality and reactivity distribution experiments confirm the current core model. As the basis of this paper is based on the long-term cooperation with our colleague Dr. Matjaz Ravnik we therefore devote this paper in his memory.", "title": "" } ]
scidocsrr
da2bf71b58756589d119d1d2d59742c5
Cross-modal adaptation for RGB-D detection
[ { "docid": "37057dff785d5d373f3c4d7b60441276", "text": "We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.", "title": "" } ]
[ { "docid": "633be21ba8ae6b8882c8b4ac37969027", "text": "This paper presents a local search, based on a new neighborhood for the job-shop scheduling problem, and its application within a biased random-key genetic algorithm. Schedules are constructed by decoding the chromosome supplied by the genetic algorithm with a procedure that generates active schedules. After an initial schedule is obtained, a local search heuristic, based on an extension of the graphical method of Akers (1956), is applied to improve the solution. The new heuristic is tested on a set of 205 standard instances taken from the job-shop scheduling literature and compared with results obtained by other approaches. The new algorithm improved the best known solution values for 57 instances.", "title": "" }, { "docid": "42452d6df7372cdc9c2cdebd8f0475cb", "text": "This paper presents SgxPectre Attacks that exploit the recently disclosed CPU bugs to subvert the confidentiality and integrity of SGX enclaves. Particularly, we show that when branch prediction of the enclave code can be influenced by programs outside the enclave, the control flow of the enclave program can be temporarily altered to execute instructions that lead to observable cache-state changes. An adversary observing such changes can learn secrets inside the enclave memory or its internal registers, thus completely defeating the confidentiality guarantee offered by SGX. To demonstrate the practicality of our SgxPectre Attacks, we have systematically explored the possible attack vectors of branch target injection, approaches to win the race condition during enclave’s speculative execution, and techniques to automatically search for code patterns required for launching the attacks. Our study suggests that any enclave program could be vulnerable to SgxPectre Attacks since the desired code patterns are available in most SGX runtimes (e.g., Intel SGX SDK, Rust-SGX, and Graphene-SGX). Most importantly, we have applied SgxPectre Attacks to steal seal keys and attestation keys from Intel signed quoting enclaves. The seal key can be used to decrypt sealed storage outside the enclaves and forge valid sealed data; the attestation key can be used to forge attestation signatures. For these reasons, SgxPectre Attacks practically defeat SGX’s security protection. This paper also systematically evaluates Intel’s existing countermeasures against SgxPectre Attacks and discusses the security implications.", "title": "" }, { "docid": "5207b424fcaab6ed130ccf85008f1d46", "text": "We describe a component of a document analysis system for constructing ontologies for domain-specific web tables imported into Excel. This component automates extraction of the Wang Notation for the column header of a table. Using column-header specific rules for XY cutting we convert the geometric structure of the column header to a linear string denoting cell attributes and directions of cuts. The string representation is parsed by a context-free grammar and the parse tree is further processed to produce an abstract data-type representation (the Wang notation tree) of each column category. Experiments were carried out to evaluate this scheme on the original and edited column headers of Excel tables drawn from a collection of 200 used in our earlier work. The transformed headers were obtained by editing the original column headers to conform to the format targeted by our grammar. Forty-four original headers and their reformatted versions were submitted as input to our software system. 
Our grammar was able to parse and the extract Wang notation tree for all the edited headers, but for only four of the original headers. We suggest extensions to our table grammar that would enable processing a larger fraction of headers without manual editing.", "title": "" }, { "docid": "707a31c60288fc2873bb37544bb83edf", "text": "The game of Go has a long history in East Asian countries, but the field of Computer Go has yet to catch up to humans until the past couple of years. While the rules of Go are simple, the strategy and combinatorics of the game are immensely complex. Even within the past couple of years, new programs that rely on neural networks to evaluate board positions still explore many orders of magnitude more board positions per second than a professional can. We attempt to mimic human intuition in the game by creating a convolutional neural policy network which, without any sort of tree search, should play the game at or above the level of most humans. We introduce three structures and training methods that aim to create a strong Go player: non-rectangular convolutions, which will better learn the shapes on the board, supervised learning, training on a data set of 53,000 professional games, and reinforcement learning, training on games played between different versions of the network. Our network has already surpassed the skill level of intermediate amateurs simply using supervised learning. Further training and implementation of non-rectangular convolutions and reinforcement learning will likely increase this skill level much further.", "title": "" }, { "docid": "fc50b185323c45e3d562d24835e99803", "text": "The neuropeptide calcitonin gene-related peptide (CGRP) is implicated in the underlying pathology of migraine by promoting the development of a sensitized state of primary and secondary nociceptive neurons. The ability of CGRP to initiate and maintain peripheral and central sensitization is mediated by modulation of neuronal, glial, and immune cells in the trigeminal nociceptive signaling pathway. There is accumulating evidence to support a key role of CGRP in promoting cross excitation within the trigeminal ganglion that may help to explain the high co-morbidity of migraine with rhinosinusitis and temporomandibular joint disorder. In addition, there is emerging evidence that CGRP facilitates and sustains a hyperresponsive neuronal state in migraineurs mediated by reported risk factors such as stress and anxiety. In this review, the significant role of CGRP as a modulator of the trigeminal system will be discussed to provide a better understanding of the underlying pathology associated with the migraine phenotype.", "title": "" }, { "docid": "d59f6325233544b2deaaa60b8743312a", "text": "Printed documents are vulnerable to forgery through the latest technology development and it becomes extremely important. Most of the forgeries can be resulting loss of personal identity or ownership of a certain valuable object. This paper proposes novel authentication technique and schema for printed document authentication using watermarked QR (Quick Response) code. 
The technique is based Watermarked QR code generated with embedding logo belongs to the owner of the document which contain validation link, and the schema is checking the validation link of the printed document which linked to the web server and database server through internet connection by scanning it over camera phone and QR code reader, the result from this technique and schema is the validation can be done in real-time using smart phone such as smart phone based Android, Black Berry, and iOS. To get a good performance in extracting and validating printed document, it can be done by preparing in advance the validation link via internet connection to get the authentication of information hidden. Finally, this paper provide experimental results to demonstrate the authenticated of printed documents using watermarked QR code.", "title": "" }, { "docid": "3a98eec0c3c9d9b5e99f44c6ae932686", "text": "This letter proposes an ensemble neural network (Ensem-NN) for skeleton-based action recognition. The Ensem-NN is introduced based on the idea of ensemble learning, “two heads are better than one.” According to the property of skeleton sequences, we design one-dimensional convolution neural network with residual structure as Base-Net. From entirety to local, from focus to motion, we designed four different subnets based on the Base-Net to extract diverse features. The first subnet is a Two-stream Entirety Net , which performs on the entirety skeleton and explores both temporal and spatial features. The second is a Body-part Net, which can extract fine-grained spatial and temporal features. The third is an Attention Net, in which a channel-wised attention mechanism can learn important frames and feature channels. Frame-difference Net, as the fourth subnet, aims at exploring motion features. Finally, the four subnets are fused as one ensemble network. Experimental results show that the proposed Ensem-NN performs better than state-of-the-art methods on three widely used datasets.", "title": "" }, { "docid": "9876e4298f674a617f065f348417982a", "text": "On the basis of medical officers diagnosis, thirty three (N = 33) hypertensives, aged 35-65 years, from Govt. General Hospital, Pondicherry, were examined with four variables viz, systolic and diastolic blood pressure, pulse rate and body weight. The subjects were randomly assigned into three groups. The exp. group-I underwent selected yoga practices, exp. group-II received medical treatment by the physician of the said hospital and the control group did not participate in any of the treatment stimuli. Yoga imparted in the morning and in the evening with 1 hr/session. day-1 for a total period of 11-weeks. Medical treatment comprised drug intake every day for the whole experimental period. The result of pre-post test with ANCOVA revealed that both the treatment stimuli (i.e., yoga and drug) were effective in controlling the variables of hypertension.", "title": "" }, { "docid": "d96ab69ee3d31f9e8a5b447d7dc5f5fb", "text": "Heteroepitaxy between transition-metal dichalcogenide (TMDC) monolayers can fabricate atomically thin semiconductor heterojunctions without interfacial contamination, which are essential for next-generation electronics and optoelectronics. Here we report a controllable two-step chemical vapor deposition (CVD) process for lateral and vertical heteroepitaxy between monolayer WS2 and MoS2 on a c-cut sapphire substrate. 
Lateral and vertical heteroepitaxy can be selectively achieved by carefully controlling the growth of MoS2 monolayers that are used as two-dimensional (2D) seed crystals. Using hydrogen as a carrier gas, we synthesize ultraclean MoS2 monolayers, which enable lateral heteroepitaxial growth of monolayer WS2 from the MoS2 edges to create atomically coherent and sharp in-plane WS2/MoS2 heterojunctions. When no hydrogen is used, we obtain MoS2 monolayers decorated with small particles along the edges, inducing vertical heteroepitaxial growth of monolayer WS2 on top of the MoS2 to form vertical WS2/MoS2 heterojunctions. Our lateral and vertical atomic layer heteroepitaxy steered by seed defect engineering opens up a new route toward atomically controlled fabrication of 2D heterojunction architectures.", "title": "" }, { "docid": "75ed4cabbb53d4c75fda3a291ea0ab67", "text": "Optimization of energy consumption in future intelligent energy networks (or Smart Grids) will be based on grid-integrated near-real-time communications between various grid elements in generation, transmission, distribution and loads. This paper discusses some of the challenges and opportunities of communications research in the areas of smart grid and smart metering. In particular, we focus on some of the key communications challenges for realizing interoperable and future-proof smart grid/metering networks, smart grid security and privacy, and how some of the existing networking technologies can be applied to energy management. Finally, we also discuss the coordinated standardization efforts in Europe to harmonize communications standards and protocols.", "title": "" }, { "docid": "ebbb7b73c8c212a310bd0378f2ce39aa", "text": "\"Nail clipping is a simple technique for diagnosis of several nail unit dermatoses. This article summarizes the practical approach, utility, and histologic findings of a nail clipping in evaluation of onychomycosis, nail unit psoriasis, onychomatricoma, subungual hematoma, melanonychia, and nail cosmetics, and the forensic applications of this easily obtained specimen. It reviews important considerations in optimizing specimen collection, processing methods, and efficacy of special stains in several clinical contexts. Readers will develop a greater understanding and ease of application of this indispensable procedure in assessing nail unit dermatoses.\"", "title": "" }, { "docid": "fe759d1674a09bb5b48f7645fe2f2ced", "text": "Conceptualization (AC)", "title": "" }, { "docid": "f4639c2523687aa0d5bfdd840df9cfa4", "text": "This established database of manufacturers and their design specification, determined the condition and design of the vehicle based on the perception and preference of jeepney drivers and passengers, and compared the parts of the jeepney vehicle using Philippine National Standards and international standards. The study revealed that most jeepney manufacturing firms have varied specifications with regard to the capacity, dimensions and weight of the vehicle and similar specification on the parts and equipment of the jeepney vehicle. Most of the jeepney drivers and passengers want to improve, change and standardize the parts of the jeepney vehicle. The parts of jeepney vehicles have similar specifications compared to the 4 out of 5 mandatory PNS and 22 out 32 UNECE Regulations applicable for jeepney vehicle. 
It is concluded that the jeepney vehicle can be standardized in terms of design, safety and environmental concerns.", "title": "" }, { "docid": "32b2cd6b63c6fc4de5b086772ef9d319", "text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highly connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.", "title": "" }, { "docid": "27eaf9f53dc88556a5d23f7cd72c196c", "text": "O management in service industry, especially in Health Care is so crucial. There is no sector that the importance of planning could be underestimated, hospital management is one of them. It is the one that effects human life, for that reason forecasting should be done carefully. Forecasting is one of the first steps in planning, the success of the plans depends on the accuracy of the forecasts. In the service industries like the hospitals, there are many plans that depends on the forecasts, from capacity planning to aggregate planning, from layout decisions to the daily schedules. In this paper, many forecasting methods are studied and the accuracy of the forecasts are determined by the error indicators.", "title": "" }, { "docid": "43d2a1fcf73552a3983e862817d4eb92", "text": "Cross-lingual transfer has been shown to produce good results for dependency parsing of resource-poor languages. Although this avoids the need for a target language treebank, most approaches have still used large parallel corpora. However, parallel data is scarce for low-resource languages, and we report a new method that does not need parallel data. Our method learns syntactic word embeddings that generalise over the syntactic contexts of a bilingual vocabulary, and incorporates these into a neural network parser. We show empirical improvements over a baseline delexicalised parser on both the CoNLL and Universal Dependency Treebank datasets. 
We analyse the importance of the source languages, and show that combining multiple source-languages leads to a substantial improvement.", "title": "" }, { "docid": "2675b10d79ab7831550cd901ac81eec9", "text": "This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB.", "title": "" }, { "docid": "0999a01e947019409c75150f85058728", "text": "We present a robot localization system using biologically inspired vision. Our system models two extensively studied human visual capabilities: (1) extracting the “gist” of a scene to produce a coarse localization hypothesis and (2) refining it by locating salient landmark points in the scene. Gist is computed here as a holistic statistical signature of the image, thereby yielding abstract scene classification and layout. Saliency is computed as a measure of interest at every image location, which efficiently directs the time-consuming landmark-identification process toward the most likely candidate locations in the image. The gist features and salient regions are then further processed using a Monte Carlo localization algorithm to allow the robot to generate its position. We test the system in three different outdoor environments-building complex (38.4 m × 54.86 m area, 13 966 testing images), vegetation-filled park (82.3 m × 109.73 m area, 26 397 testing images), and open-field park (137.16 m × 178.31 m area, 34 711 testing images)-each with its own challenges. The system is able to localize, on average, within 0.98, 2.63, and 3.46 m, respectively, even with multiple kidnapped-robot instances.", "title": "" }, { "docid": "88b562679f217affe489b6914bbc342b", "text": "The measurement of functional gene abundance in diverse microbial communities often employs quantitative PCR (qPCR) with highly degenerate oligonucleotide primers. While degenerate PCR primers have been demonstrated to cause template-specific bias in PCR applications, the effect of such bias on qPCR has been less well explored. We used a set of diverse, full-length nifH gene standards to test the performance of several universal nifH primer sets in qPCR. We found significant template-specific bias in all but the PolF/PolR primer set. Template-specific bias caused more than 1000-fold mis-estimation of nifH gene copy number for three of the primer sets and one primer set resulted in more than 10,000-fold mis-estimation. Furthermore, such template-specific bias will cause qPCR estimates to vary in response to beta-diversity, thereby causing mis-estimation of changes in gene copy number. A reduction in bias was achieved by increasing the primer concentration. 
We conclude that degenerate primers should be evaluated across a range of templates, annealing temperatures, and primer concentrations to evaluate the potential for template-specific bias prior to their use in qPCR.", "title": "" }, { "docid": "97ac64bb4d06216253eacb17abfcb103", "text": "UIMA Ruta is a rule-based system designed for information extraction tasks, but it is also applicable for many natural language processing use cases. This demonstration gives an overview of the UIMA Ruta Workbench, which provides a development environment and tooling for the rule language. It was developed to ease every step in engineering rule-based applications. In addition to the full-featured rule editor, the user is supported by explanation of the rule execution, introspection in results, automatic validation and rule induction. Furthermore, the demonstration covers the usage and combination of arbitrary components for natural language processing.", "title": "" } ]
scidocsrr
a0b8aad2aa58819271d049f77893d0f8
Lejla Islami Assessing generational differences in susceptibility to Social Engineering attacks. A comparison between Millennial and Baby Boomer generations
[ { "docid": "9e3ad07ca89501d37812ea02861f9466", "text": "This study examines the evidence for the effectiveness of active learning. It defines the common forms of active learning most relevant for engineering faculty and critically examines the core element of each method. It is found that there is broad but uneven support for the core elements of active, collaborative, cooperative and problem-based learning.", "title": "" }, { "docid": "b57b06d861b5c4666095e356ee7e010b", "text": "Phishing is a form of electronic identity theft in which a combination of social engineering and Web site spoofing techniques is used to trick a user into revealing confidential information with economic value. The problem of social engineering attack is that there is no single solution to eliminate it completely, since it deals largely with the human factor. This is why implementing empirical experiments is very crucial in order to study and to analyze all malicious and deceiving phishing Web site attack techniques and strategies. In this paper, three different kinds of phishing experiment case studies have been conducted to shed some light into social engineering attacks, such as phone phishing and phishing Web site attacks for designing effective countermeasures and analyzing the efficiency of performing security awareness about phishing threats. Results and reactions to our experiments show the importance of conducting phishing training awareness for all users and doubling our efforts in developing phishing prevention techniques. Results also suggest that traditional standard security phishing factor indicators are not always effective for detecting phishing websites, and alternative intelligent phishing detection approaches are needed.", "title": "" } ]
[ { "docid": "3aaf9c81e8304bf540722d35c32d2046", "text": "To reduce page load times and bandwidth usage for mobile web browsing, middleboxes that compress page content are commonly used today. Unfortunately, this can hurt performance in many cases; via an extensive measurement study, we show that using middleboxes to facilitate compression results in up to 28% degradation in page load times when the client enjoys excellent wireless link conditions. We find that benefits from compression are primarily realized under bad network conditions. Guided by our study, we design and implement FlexiWeb, a framework that determines both when to use a middlebox and how to use it, based on the client's network conditions. First, FlexiWeb selectively fetches objects on a web page either directly from the source or via a middlebox, rather than fetching all objects via the middlebox. Second, instead of simply performing lossless compression of all content, FlexiWeb performs network-aware compression of images by selecting from among a range of content transformations. We implement and evaluate a prototype of FlexiWeb using Google's open source Chromium mobile browser and our implementation of a modified version of Google's open source compression proxy. Our extensive experiments show that, across a range of scenarios, FlexiWeb reduces page load times for mobile clients by 35-42% compared to the status quo.", "title": "" }, { "docid": "45eb2d7b74f485e9eeef584555e38316", "text": "With the increasing demand of massive multimodal data storage and organization, cross-modal retrieval based on hashing technique has drawn much attention nowadays. It takes the binary codes of one modality as the query to retrieve the relevant hashing codes of another modality. However, the existing binary constraint makes it difficult to find the optimal cross-modal hashing function. Most approaches choose to relax the constraint and perform thresholding strategy on the real-value representation instead of directly solving the original objective. In this paper, we first provide a concrete analysis about the effectiveness of multimodal networks in preserving the inter- and intra-modal consistency. Based on the analysis, we provide a so-called Deep Binary Reconstruction (DBRC) network that can directly learn the binary hashing codes in an unsupervised fashion. The superiority comes from a proposed simple but efficient activation function, named as Adaptive Tanh (ATanh). The ATanh function can adaptively learn the binary codes and be trained via back-propagation. Extensive experiments on three benchmark datasets demonstrate that DBRC outperforms several state-of-the-art methods in both image2text and text2image retrieval task.", "title": "" }, { "docid": "ffea50948eab00d47f603d24bcfc1bfd", "text": "A statistical pattern-recognition technique was applied to the classification of musical instrument tones within a taxonomic hierarchy. Perceptually salient acoustic features— related to the physical properties of source excitation and resonance structure—were measured from the output of an auditory model (the log-lag correlogram) for 1023 isolated tones over the full pitch ranges of 15 orchestral instruments. The data set included examples from the string (bowed and plucked), woodwind (single, double, and air reed), and brass families. Using 70%/30% splits between training and test data, maximum a posteriori classifiers were constructed based on Gaussian models arrived at through Fisher multiplediscriminant analysis. 
The classifiers distinguished transient from continuant tones with approximately 99% correct performance. Instrument families were identified with approximately 90% performance, and individual instruments were identified with an overall success rate of approximately 70%. These preliminary analyses compare favorably with human performance on the same task and demonstrate the utility of the hierarchical approach to classification.", "title": "" }, { "docid": "96bddddd86976f4dff0b984ef062704b", "text": "How do the structures of the medial temporal lobe contribute to memory? To address this question, we examine the neurophysiological correlates of both recognition and associative memory in the medial temporal lobe of humans, monkeys, and rats. These cross-species comparisons show that the patterns of mnemonic activity observed throughout the medial temporal lobe are largely conserved across species. Moreover, these findings show that neurons in each of the medial temporal lobe areas can perform both similar as well as distinctive mnemonic functions. In some cases, similar patterns of mnemonic activity are observed across all structures of the medial temporal lobe. In the majority of cases, however, the hippocampal formation and surrounding cortex signal mnemonic information in distinct, but complementary ways.", "title": "" }, { "docid": "639bbe7b640c514ab405601c7c3cfa01", "text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.", "title": "" }, { "docid": "3f09b82a9a9be064819c1d7b402b0031", "text": "Academic dishonesty is widespread within secondary and higher education. It can include unethical academic behaviors such as cheating, plagiarism, or unauthorized help. Researchers have investigated a number of individual and contextual factors in an effort to understand the phenomenon. In the last decade, there has been increasing interest in the role personality plays in explaining unethical academic behaviors. We used meta-analysis to estimate the relationship between each of the Big Five personality factors and academic dishonesty. Previous reviews have highlighted the role of neuroticism and extraversion as potential predictors of cheating behavior. 
However, our results indicate that conscientiousness and agreeableness are the strongest Big Five predictors, with both factors negatively related to academic dishonesty. We discuss the implications of our findings for both research and practice. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ce91f2a7b4ec328e07d22caaf5e35b51", "text": "http://www.jstor.org Interorganizational Collaboration and the Locus of Innovation: Networks of Learning in Biotechnology Author(s): Walter W. Powell, Kenneth W. Koput and Laurel Smith-Doerr Source: Administrative Science Quarterly, Vol. 41, No. 1 (Mar., 1996), pp. 116-145 Published by: on behalf of the Sage Publications, Inc. Johnson Graduate School of Management, Cornell University Stable URL: http://www.jstor.org/stable/2393988 Accessed: 18-12-2015 16:50 UTC", "title": "" }, { "docid": "c4d0084aab61645fc26e099115e1995c", "text": "Digital transformation often includes establishing big data analytics capabilities and poses considerable challenges for traditional manufacturing organizations, such as car companies. Successfully introducing big data analytics requires substantial organizational transformation and new organizational structures and business processes. Based on the three-stage evolution of big data analytics capabilities at AUDI, the full article provides recommendations for how traditional manufacturing organizations can successfully introduce big data analytics and master the related organizational transformations. Stage I: Advancing. In Stage I, AUDI’s sales and marketing department initiated data analytics projects. Commitment within the organization for data analytics grew slowly, and the strategic importance of the area was increasingly recognized. During this first stage, the IT department played a passive role, responding to the initiators of data analytics projects. The company’s digital innovation hub, however, laid the technology foundation for big data analytics during the Advancing stage. Stage II: Enabling. In Stage II, analytics competencies were built up not only in the digital innovation hub but also in the IT department. The IT department enabled big data analytics through isolated technology activities, sometimes taking on or insourcing tasks previously carried out by external consultancies or the digital innovation hub. Analytics services were developed through a more advanced technology infrastructure as well as analytics methods. Stage III: Leveraging. In the current Stage III, AUDI is leveraging the analytics competencies of the digital innovation hub and the IT department to centrally provide analytics-as-a-service. The IT department is now fully responsible for all technology tasks and is evolving to become a consulting partner for the other big data analytics stakeholders (sales and marketing department and digital innovation hub). In particular, digital services are enabled by leveraging the most valuable data source (i.e., operational car data).", "title": "" }, { "docid": "dc94e340ceb76a0c9fda47bac4be9920", "text": "Mobile health (mHealth) apps are an ideal tool for monitoring and tracking long-term health conditions; they are becoming incredibly popular despite posing risks to personal data privacy and security. In this paper, we propose a testing method for Android mHealth apps which is designed using a threat analysis, considering possible attack scenarios and vulnerabilities specific to the domain. 
To demonstrate the method, we have applied it to apps for managing hypertension and diabetes, discovering a number of serious vulnerabilities in the most popular applications. Here we summarise the results of that case study, and discuss the experience of using a testing method dedicated to the domain, rather than out-of-the-box Android security testing methods. We hope that details presented here will help design further, more automated, mHealth security testing tools and methods.", "title": "" }, { "docid": "98881e7174d495d42a0d68c0f0d7bf3b", "text": "The design process is often characterized by and realized through the iterative steps of evaluation and refinement. When the process is based on a single creative domain such as visual art or audio production, designers primarily take inspiration from work within their domain and refine it based on their own intuitions or feedback from an audience of experts from within the same domain. What happens, however, when the creative process involves more than one creative domain such as in a digital game? How should the different domains influence each other so that the final outcome achieves a harmonized and fruitful communication across domains? How can a computational process orchestrate the various computational creators of the corresponding domains so that the final game has the desired functional and aesthetic characteristics? To address these questions, this paper identifies game facet orchestration as the central challenge for artificial-intelligence-based game generation, discusses its dimensions, and reviews research in automated game generation that has aimed to tackle it. In particular, we identify the different creative facets of games, propose how orchestration can be facilitated in a top-down or bottom-up fashion, review indicative preliminary examples of orchestration, and conclude by discussing the open questions and challenges ahead.", "title": "" }, { "docid": "2b1858fc902102d06ea3fc0394b842bf", "text": "Recently, deep learning approaches with various network architectures have achieved significant performance improvement over existing iterative reconstruction methods in various imaging problems. However, it is still unclear why these deep learning architectures work for specific inverse problems. Moreover, unlike the usual evolution of signal processing theory around the classical theories, the link between the deep learning and the classical signal processing approaches such as wavelet, non-local processing, compressed sensing, etc, is still not well understood, which often makes signal processors in deep troubles. To address these issues, here we show that the long-searched-for missing link is the convolutional framelet for representing a signal by convolving local and non-local bases. The convolutional framelets was originally developed to generalize the recent theory of low-rank Hankel matrix approaches, and this paper significantly extends the idea to derive a deep neural network using multi-layer convolutional framelets with perfect reconstruction (PR) under rectified linear unit (ReLU) nonlinearity. Our analysis also shows that the popular deep network components such as residual block, redundant filter channels, and concatenated ReLU (CReLU) indeed help to achieve the PR, while the pooling and unpooling layers should be augmented with multi-resolution convolutional framelets to achieve PR condition. 
This discovery reveals the limitations of many existing deep learning architectures for inverse problems, and leads us to propose a novel deep convolutional framelets neural network. Using numerical experiments with sparse view x-ray computed tomography (CT), we demonstrated that our deep convolution framelets network shows consistent improvement over existing deep architectures at all downsampling factors. This discovery suggests that the success of deep learning is not from a magical power of a black-box, but rather comes from the power of a novel signal representation using non-local basis combined with data-driven local basis, which is indeed a natural extension of classical signal processing theory. Index Terms: Convolutional framelets, deep learning, inverse problems, ReLU, perfect reconstruction condition (arXiv:1707.00372v1 [stat.ML] 3 Jul 2017)", "title": "" }, { "docid": "a9abef2213a7a24ec87aef11888d7854", "text": "Mechanical ventilation (MV) remains the cornerstone of acute respiratory distress syndrome (ARDS) management. It guarantees sufficient alveolar ventilation, high FiO2 concentration, and high positive end-expiratory pressure levels. However, experimental and clinical studies have accumulated, demonstrating that MV also contributes to the high mortality observed in patients with ARDS by creating ventilator-induced lung injury. Under these circumstances, extracorporeal lung support (ECLS) may be beneficial in two distinct clinical settings: to rescue patients from the high risk for death associated with severe hypoxemia, hypercapnia, or both not responding to maximized conventional MV, and to replace MV and minimize/abolish the harmful effects of ventilator-induced lung injury. High extracorporeal blood flow venovenous extracorporeal membrane oxygenation (ECMO) may therefore rescue the sickest patients with ARDS from the high risk for death associated with severe hypoxemia, hypercapnia, or both not responding to maximized conventional MV. Successful venovenous ECMO treatment in patients with extremely severe H1N1-associated ARDS and positive results of the CESAR trial have led to an exponential use of the technology in recent years. Alternatively, lower-flow extracorporeal CO2 removal devices may be used to reduce the intensity of MV (by reducing Vt from 6 to 3-4 ml/kg) and to minimize or even abolish the harmful effects of ventilator-induced lung injury if used as an alternative to conventional MV in nonintubated, nonsedated, and spontaneously breathing patients. Although conceptually very attractive, the use of ECLS in patients with ARDS remains controversial, and high-quality research is needed to further advance our knowledge in the field.", "title": "" }, { "docid": "21909d9d0a741061a65cf06e023f7aa2", "text": "Integrated magnetics is applied to replace the three-discrete transformers by a single core transformer in a three-phase LLC resonant converter. 
The magnetic circuit of the integrated transformer is analyzed to derive coupling factors between the phases; these coupling factors are intentionally minimized to realize the magnetic behavior of the three-discrete transformers, with the benefit of eliminating the dead space between them. However, in a practical design, the transformer parameters in a multiphase LLC resonant converter are never exactly identical among the phases, leading to unbalanced current sharing between the paralleled modules. In this regard, a current balancing method is proposed in this paper. The proposed method can improve the current sharing between the paralleled phases relying on a single balancing transformer, and its theory is based on Ampere’s law, by forcing the sum of the three resonant currents to zero. Theoretically, if an ideal balancing transformer has been utilized, it would impose the same effect of connecting the integrated transformer in a solid star connection. However, as the core permeability of the balancing transformer is finite, the unbalanced current cannot be completely suppressed. Nonetheless, utilizing a single balancing transformer has an advantage over the star connection, as it keeps the interleaving structure simple which allows for traditional phase-shedding techniques, and it can be a solution for the other multiphase topologies where realizing a star connection is not feasible. Along with the theoretical discussion, simulation and experimental results are also presented to evaluate the proposed method considering various sources of the unbalance such as a mismatch in: 1) resonant and magnetizing inductances; 2) resonant capacitors; 3) transistor on-resistances of the MOSFETS; and 4) propagation delay of the gate drivers.", "title": "" }, { "docid": "eb0ec729796a93f36d348e70e3fa9793", "text": "This paper proposes a novel approach to measure the object size using a regular digital camera. Nowadays, the remote object-size measurement is very crucial to many multimedia applications. Our proposed computer-aided automatic object-size measurement technique is based on a new depth-information extraction (range finding) scheme using a regular digital camera. The conventional range finders are often carried out using the passive method such as stereo cameras or the active method such as ultrasonic and infrared equipment. They either require the cumbersome set-up or deal with point targets only. The proposed approach requires only a digital camera with certain image processing techniques and relies on the basic principles of visible light. Experiments are conducted to evaluate the performance of our proposed new object-size measurement mechanism. The average error-percentage of this method is below 2%. It demonstrates the striking effectiveness of our proposed new method.", "title": "" }, { "docid": "705694c36d36ca6950740d754160f4bd", "text": "There is a growing concern that excessive and uncontrolled use of Facebook not only interferes with performance at school or work but also poses threats to physical and psychological well-being. The present research investigated how two individual difference variables--social anxiety and need for social assurance--affect problematic use of Facebook. Drawing on the basic premises of the social skill model of problematic Internet use, we hypothesized that social anxiety and need for social assurance would be positively correlated with problematic use of Facebook. 
Furthermore, it was predicted that need for social assurance would moderate the relationship between social anxiety and problematic use. A cross-sectional online survey was conducted with a college student sample in the United States (N=243) to test the proposed hypotheses. Results showed that both social anxiety and need for social assurance had a significant positive association with problematic use of Facebook. More importantly, the data demonstrated that need for social assurance served as a significant moderator of the relationship between social anxiety and problematic Facebook use. The positive association between social anxiety and problematic Facebook use was significant only for Facebook users with medium to high levels of need for social assurance but not for those with a low level of need for social assurance. Theoretical and practical implications of these findings were discussed.", "title": "" }, { "docid": "d053f8b728f94679cd73bc91193f0ba6", "text": "Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.", "title": "" }, { "docid": "3ae51aede5a7a551cfb2aecbc77a9ecb", "text": "We present the Crossfire attack -- a powerful attack that degrades and often cuts off network connections to a variety of selected server targets (e.g., servers of an enterprise, a city, a state, or a small country) by flooding only a few network links. In Crossfire, a small set of bots directs low intensity flows to a large number of publicly accessible servers. The concentration of these flows on the small set of carefully chosen links floods these links and effectively disconnects selected target servers from the Internet. The sources of the Crossfire attack are undetectable by any targeted servers, since they no longer receive any messages, and by network routers, since they receive only low-intensity, individual flows that are indistinguishable from legitimate flows. The attack persistence can be extended virtually indefinitely by changing the set of bots, publicly accessible servers, and target links while maintaining the same disconnection targets. We demonstrate the attack feasibility using Internet experiments, show its effects on a variety of chosen targets (e.g., servers of universities, US states, East and West Coasts of the US), and explore several countermeasures.", "title": "" }, { "docid": "31be3d5db7d49d1bfc58c81efec83bdc", "text": "Electromagnetic elements such as inductance are not used in switched-capacitor converters to convert electrical power. In contrast, capacitors are used for storing and transforming the electrical power in these new topologies. Lower volume, higher power density, and more integration ability are the most important features of these kinds of converters. 
In this paper, the most important switched-capacitor converters topologies, which have been developed in the last decade as new topologies in power electronics, are introduced, analyzed, and compared with each other, in brief. Finally, a 100 watt double-phase half-mode resonant converter is simulated to convert 48V dc to 24 V dc for light weight electrical vehicle applications. Low output voltage ripple (0.4%), and soft switching for all power diodes and switches are achieved under the worst-case conditions.", "title": "" }, { "docid": "7f49cb5934130fb04c02db03bd40e83d", "text": "BACKGROUND\nResearch literature on problematic smartphone use, or smartphone addiction, has proliferated. However, relationships with existing categories of psychopathology are not well defined. We discuss the concept of problematic smartphone use, including possible causal pathways to such use.\n\n\nMETHOD\nWe conducted a systematic review of the relationship between problematic use with psychopathology. Using scholarly bibliographic databases, we screened 117 total citations, resulting in 23 peer-reviewer papers examining statistical relations between standardized measures of problematic smartphone use/use severity and the severity of psychopathology.\n\n\nRESULTS\nMost papers examined problematic use in relation to depression, anxiety, chronic stress and/or low self-esteem. Across this literature, without statistically adjusting for other relevant variables, depression severity was consistently related to problematic smartphone use, demonstrating at least medium effect sizes. Anxiety was also consistently related to problem use, but with small effect sizes. Stress was somewhat consistently related, with small to medium effects. Self-esteem was inconsistently related, with small to medium effects when found. Statistically adjusting for other relevant variables yielded similar but somewhat smaller effects.\n\n\nLIMITATIONS\nWe only included correlational studies in our systematic review, but address the few relevant experimental studies also.\n\n\nCONCLUSIONS\nWe discuss causal explanations for relationships between problem smartphone use and psychopathology.", "title": "" }, { "docid": "651db77789c5f5edaa933534255c88d6", "text": "Abstract: Rapid increase in internet users along with growing power of online review sites and social media has given birth to Sentiment analysis or Opinion mining, which aims at determining what other people think and comment. Sentiments or Opinions contain public generated content about products, services, policies and politics. People are usually interested to seek positive and negative opinions containing likes and dislikes, shared by users for features of particular product or service. Therefore product features or aspects have got significant role in sentiment analysis. In addition to sufficient work being performed in text analytics, feature extraction in sentiment analysis is now becoming an active area of research. This review paper discusses existing techniques and approaches for feature extraction in sentiment analysis and opinion mining. In this review we have adopted a systematic literature review process to identify areas well focused by researchers, least addressed areas are also highlighted giving an opportunity to researchers for further work. We have also tried to identify most and least commonly used feature selection techniques to find research gaps for future work. 
", "title": "" } ]
scidocsrr
02ae400cf71e10f88cff90be23fe2e1c
Cooperative Communications for Cognitive Radio Networks
[ { "docid": "760133d80110b5fa42c4f29291b67949", "text": "In this work, we propose a game theoretic framework to analyze the behavior of cognitive radios for distributed adaptive channel allocation. We define two different objective functions for the spectrum sharing games, which capture the utility of selfish users and cooperative users, respectively. Based on the utility definition for cooperative users, we show that the channel allocation problem can be formulated as a potential game, and thus converges to a deterministic channel allocation Nash equilibrium point. Alternatively, a no-regret learning implementation is proposed for both scenarios and it is shown to have similar performance with the potential game when cooperation is enforced, but with a higher variability across users. The no-regret learning formulation is particularly useful to accommodate selfish users. Non-cooperative learning games have the advantage of a very low overhead for information exchange in the network. We show that cooperation based spectrum sharing etiquette improves the overall network performance at the expense of an increased overhead required for information exchange", "title": "" } ]
[ { "docid": "fb7d667938c3925d975d6d40a1a3a0c8", "text": "Oestrogen is an important determinant of breast cancer risk. Oestrogen-mimicking plant compounds called phytoestrogens can bind to oestrogen receptors and exert weak oestrogenic effects. Despite this activity, epidemiological studies suggest that the incidence of breast cancer is lower in countries where the intake of phytoestrogens is high, implying that these compounds may reduce breast cancer risk, and possibly have an impact on survival. Isoflavones and lignans are the most common phytoestrogens in the diet. In this article, we present findings from human observational and intervention studies related to both isoflavone and lignan exposure and breast cancer risk and survival. In addition, the clinical implications of these findings are examined in the light of a growing dietary supplement market. An increasing number of breast cancer patients seek to take supplements together with their standard treatment in the hope that these will either prevent recurrence or treat their menopausal symptoms. Observational studies suggest a protective effect of isoflavones on breast cancer risk and the case may be similar for increasing lignan consumption although evidence so far is inconsistent. In contrast, short-term intervention studies suggest a possible stimulatory effect on breast tissue raising concerns of possible adverse effects in breast cancer patients. However, owing to the dearth of human studies investigating effects on breast cancer recurrence and survival the role of phytoestrogens remains unclear. So far, not enough clear evidence exists on which to base guidelines for clinical use, although raising patient awareness of the uncertain effect of phytoestrogens is recommended.", "title": "" }, { "docid": "2b3c9f9c2c44d1b532f15e00e3853671", "text": "Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep convolutional neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance. During the past few years, tremendous progresses have been made in this area. In this paper, we survey the recent advanced techniques for compacting and accelerating CNNs model developed. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transfered/compact convolutional filters and knowledge distillation. Methods of parameter pruning and sharing will be described at the beginning, after that the other techniques will be introduced. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages and drawbacks etc. Then we will go through a few very recent additional successful methods, for example, dynamic networks and stochastic depths networks. After that, we survey the evaluation matrix, main datasets used for evaluating the model performance and recent benchmarking efforts. Finally we conclude this paper, discuss remaining challenges and possible directions in this topic.", "title": "" }, { "docid": "8ac6160d8e6f7d425e2b2416626e5c2d", "text": "This report presents a concept design for the algorithms part of the STL and outlines the design of the supporting language mechanism. 
Both are radical simplifications of what was proposed in the C++0x draft. In particular, this design consists of only 41 concepts (including supporting concepts), does not require concept maps, and (perhaps most importantly) does not resemble template metaprogramming.", "title": "" }, { "docid": "2275a179ddba372694f92e41779c792d", "text": "Genetic algorithms play a significant role, as search techniques for handling complex spaces, in many fields such as artificial intelligence, engineering, robotics, etc. Genetic algorithms are based on the underlying genetic process in biological organisms and on the natural evolution principles of populations. A short description is given in this lecture, introducing their use for machine learning.", "title": "" }, { "docid": "ab3c0d4fecf7722a4b592473eb0de8dc", "text": "IOT( Internet of Things) relying on exchange of information through radio frequency identification(RFID), is emerging as one of important technologies that find its use in various applications ranging from healthcare, construction, hospitality to transportation sector and many more. This paper describes about IOT, concentrating its use in improving and securing future shopping. This paper shows how RFID technology makes life easier and secure and thus helpful in the future. KeywordsIOT,RFID, Intelligent shopping, RFID tags, RFID reader, Radio frequency", "title": "" }, { "docid": "05bc787d000ecf26c8185b084f8d2498", "text": "Recommendation system is a type of information filtering systems that recommend various objects from a vast variety and quantity of items which are of the user interest. This results in guiding an individual in personalized way to interesting or useful objects in a large space of possible options. Such systems also help many businesses to achieve more profits to sustain in their filed against their rivals. But looking at the amount of information which a business holds it becomes difficult to identify the items of user interest. Therefore personalization or user profiling is one of the challenging tasks that give access to user relevant information which can be used in solving the difficult task of classification and ranking items according to an individual’s interest. Profiling can be done in various ways such as supervised or unsupervised, individual or group profiling, distributive or and non-distributive profiling. Our focus in this paper will be on the dataset which we will use, we identify some interesting facts by using Weka Tool that can be used for recommending the items from dataset .Our aim is to present a novel technique to achieve user profiling in recommendation system. KeywordsMachine Learning; Information Retrieval; User Profiling", "title": "" }, { "docid": "7d6cd23ec44d7425b10ed086380bfc14", "text": "Objectives: To analysis different approaches for taxonomy construction to improve the knowledge classification, information retrieval and other data mining process. Findings: Taxonomies learning keep getting more important process for knowledge sharing about a domain. It is also used for application development such as knowledge searching, information retrieval. The taxonomy can be build manually but it is a complex process when the data are so large and it also produce some errors while taxonomy construction. There is various automatic taxonomy construction techniques are used to learn taxonomy based on keyword phrases, text corpus and from domain specific concepts etc. So it is required to build taxonomy with less human effort and with less error rate. 
This paper provides detailed information about those techniques. Methods: The methods such as lexico-syntatic pattern, semi supervised methods, graph based methods, ontoplus, TaxoLearn, Bayesian approach, two-step method, ontolearn and Automatic Taxonomy Construction from Text are analyzed in this paper. Application/Improvements: The findings of this work prove that the TaxoFinder approach provides better result than other approaches.", "title": "" }, { "docid": "87fa3f2317b53520839bc3cb90cf291b", "text": "In an experimental study of language switching and selection, bilinguals named numerals in either their first or second language unpredictably. Response latencies (RTs) on switch trials (where the response language changed from the previous trial) were slower than on nonswitch trials. As predicted, the language-switching cost was consistently larger when switching to the dominant L 1 from the weaker L2 than vice versa such that, on switch trials, L 1 responses were slower than in L 2. This “paradoxical” asymmetry in the cost of switching languages is explained in terms of differences in relative strength of the bilingual’s two languages and the involuntary persistence of the previous language set across an intended switch of language. Naming in the weaker language, L 2, requires active inhibition or suppression of the stronger competitor language, L 1; the inhibition persists into the following (switch) trial in the form of “negative priming” of the L 1 lexicon as a whole. © 1999 Academic Press", "title": "" }, { "docid": "2939531a61f319ace08f852f783e8734", "text": "We pose the following question: what happens when test data not only differs from training data, but differs from it in a continually evolving way? The classic domain adaptation paradigm considers the world to be separated into stationary domains with clear boundaries between them. However, in many real-world applications, examples cannot be naturally separated into discrete domains, but arise from a continuously evolving underlying process. Examples include video with gradually changing lighting and spam email with evolving spammer tactics. We formulate a novel problem of adapting to such continuous domains, and present a solution based on smoothly varying embeddings. Recent work has shown the utility of considering discrete visual domains as fixed points embedded in a manifold of lower-dimensional subspaces. Adaptation can be achieved via transforms or kernels learned between such stationary source and target subspaces. We propose a method to consider non-stationary domains, which we refer to as Continuous Manifold Adaptation (CMA). We treat each target sample as potentially being drawn from a different subspace on the domain manifold, and present a novel technique for continuous transform-based adaptation. Our approach can learn to distinguish categories using training data collected at some point in the past, and continue to update its model of the categories for some time into the future, without receiving any additional labels. Experiments on two visual datasets demonstrate the value of our approach for several popular feature representations.", "title": "" }, { "docid": "2c6d36c2c7309da8bb714c50b49caf45", "text": "Although a significant amount of work has be done on the subject of IT governance there still appears to be some disjoint and confusion about what IT governance really is and how it may be realized in practice. 
This research-in-progress paper draws on existing research related to IT governance to provide a more in-depth understanding of the concept. It describes the current understanding of IT governance and argues for the extension and further development of the various dimensions of governance. An extended model of IT governance is proposed. A research agenda is outlined.", "title": "" }, { "docid": "7abee7a80d26726257d05a2d1a6e398e", "text": "Forty years ago, May proved that sufficiently large or complex ecological networks have a probability of persisting that is close to zero, contrary to previous expectations. May analysed large networks in which species interact at random. However, in natural systems pairs of species have well-defined interactions (for example predator–prey, mutualistic or competitive). Here we extend May’s results to these relationships and find remarkable differences between predator–prey interactions, which are stabilizing, and mutualistic and competitive interactions, which are destabilizing. We provide analytic stability criteria for all cases. We use the criteria to prove that, counterintuitively, the probability of stability for predator–prey networks decreases when a realistic food web structure is imposed or if there is a large preponderance of weak interactions. Similarly, stability is negatively affected by nestedness in bipartite mutualistic networks. These results are found by separating the contribution of network structure and interaction strengths to stability. Stable predator–prey networks can be arbitrarily large and complex, provided that predator–prey pairs are tightly coupled. The stability criteria are widely applicable, because they hold for any system of differential equations.", "title": "" }, { "docid": "227ad7173deb06c2d492bb27ce70f5df", "text": "A public service motivation (PSM) inclines employees to provide effort out of concern for the impact of that effort on a valued social service. Though deemed to be important in the literature on public administration, this motivation has not been formally considered by economists. When a PSM exists, this paper establishes conditions under which government bureaucracy can better obtain PSM motivated effort from employees than a standard profit maximizing firm. The model also provides an efficiency rationale for low-powered incentives in both bureaucracies and other organizations producing social services.  2000 Elsevier Science S.A. All rights reserved.", "title": "" }, { "docid": "d81e35229c0fc0b9c7d498a254a4d6be", "text": "Recent advances in the field of technology have led to the emergence of innovative technological smart solutions providing unprecedented opportunities for application in the tourism and hospitality industry. With intensified competition in the tourism market place, it has become paramount for businesses to explore the potential of technologies, not only to optimize existing processes but facilitate the creation of more meaningful and personalized services and experiences. This study aims to bridge the current knowledge gap between smart technologies and experience personalization to understand how smart mobile technologies can facilitate personalized experiences in the context of the hospitality industry. 
By adopting a qualitative case study approach, this paper makes a two-fold contribution; it a) identifies the requirements of smart technologies for experience creation, including information aggregation, ubiquitous mobile connectedness and real time synchronization and b) highlights how smart technology integration can lead to two distinct levels of personalized tourism experiences. The paper concludes with the development of a model depicting the dynamic process of experience personalization and a discussion of the strategic implications for tourism and hospitality management and research.", "title": "" }, { "docid": "9097c75f98fcf355ce802f91b7599704", "text": "LetM be an asymptotically flat 3-manifold of nonnegative scalar curvature. The Riemannian Penrose Inequality states that the area of an outermost minimal surface N in M is bounded by the ADM mass m according to the formula |N | ≤ 16πm2. We develop a theory of weak solutions of the inverse mean curvature flow, and employ it to prove this inequality for each connected component of N using Geroch’s monotonicity formula for the ADM mass. Our method also proves positivity of Bartnik’s gravitational capacity by computing a positive lower bound for the mass purely in terms of local geometry. 0. Introduction In this paper we develop the theory of weak solutions for the inverse mean curvature flow of hypersurfaces in a Riemannian manifold, and apply it to prove the Riemannian Penrose Inequality for a connected horizon, to wit: the total mass of an asymptotically flat 3-manifold of nonnegative scalar curvature is bounded below in terms of the area of each smooth, compact, connected, “outermost” minimal surface in the 3-manifold. A minimal surface is called outermost if it is not separated from infinity by any other compact minimal surface. The result was announced in [51]. The first author acknowledges the support of Sonderforschungsbereich 382, Tübingen. The second author acknowledges the support of an NSF Postdoctoral Fellowship, NSF Grants DMS-9626405 and DMS-9708261, a Sloan Foundation Fellowship, and the Max-Planck-Institut for Mathematics in the Sciences, Leipzig. Received May 15, 1998.", "title": "" }, { "docid": "ddf56804605fd0957316979af50f010a", "text": "In this work, we provide an overview of our previously published works on incorporating demand uncertainty in midterm planning of multisite supply chains. A stochastic programming based approach is described to model the planning process as it reacts to demand realizations unfolding over time. In the proposed bilevel-framework, the manufacturing decisions are modeled as ‘here-and-now’ decisions, which are made before demand realization. Subsequently, the logistics decisions are postponed in a ‘waitand-see’ mode to optimize in the face of uncertainty. In addition, the trade-off between customer satisfaction level and production costs is also captured in the model. The proposed model provides an effective tool for evaluating and actively managing the exposure of an enterprises assets (such as inventory levels and profit margins) to market uncertainties. The key features of the proposed framework are highlighted through a supply chain planning case study. # 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "56ea461f00ef3dd9d760f122d405da81", "text": "Neuronal apoptosis sculpts the developing brain and has a potentially important role in neurodegenerative diseases. 
The principal molecular components of the apoptosis programme in neurons include Apaf-1 (apoptotic protease-activating factor 1) and proteins of the Bcl-2 and caspase families. Neurotrophins regulate neuronal apoptosis through the action of critical protein kinase cascades, such as the phosphoinositide 3-kinase/Akt and mitogen-activated protein kinase pathways. Similar cell-death-signalling pathways might be activated in neurodegenerative diseases by abnormal protein structures, such as amyloid fibrils in Alzheimer's disease. Elucidation of the cell death machinery in neurons promises to provide multiple points of therapeutic intervention in neurodegenerative diseases.", "title": "" }, { "docid": "2e167507f8b44e783d60312c0d71576d", "text": "The goal of this paper is to study different techniques to predict stock price movement using the sentiment analysis from social media, data mining. In this paper we will find efficient method which can predict stock movement more accurately. Social media offers a powerful outlet for people’s thoughts and feelings it is an enormous ever-growing source of texts ranging from everyday observations to involved discussions. This paper contributes to the field of sentiment analysis, which aims to extract emotions and opinions from text. A basic goal is to classify text as expressing either positive or negative emotion. Sentiment classifiers have been built for social media text such as product reviews, blog posts, and even twitter messages. With increasing complexity of text sources and topics, it is time to re-examine the standard sentiment extraction approaches, and possibly to redefine and enrich the definition of sentiment. Next, unlike sentiment analysis research to date, we examine sentiment expression and polarity classification within and across various social media streams by building topical datasets within each stream. Different data mining methods are used to predict market more efficiently along with various hybrid approaches. We conclude that stock prediction is very complex task and various factors should be considered for forecasting the market more accurately and efficiently.", "title": "" }, { "docid": "597f097d5206fc259224b905d4d20e20", "text": "We present here a QT database designed for evaluation of algorithms that detect waveform boundaries in the ECG. The database consists of 105 fifteen-minute excerpts of two-channel ECG Holter recordings, chosen to include a broad variety of QRS and ST-T morphologies. Waveform boundaries for a subset of beats in these recordings have been manually determined by expert annotators using an interactive graphic display to view both signals simultaneously and to insert the annotations. Examples of each morphology were included in this subset of annotated beats; at least 30 beats in each record, 3622 beats in all, were manually annotated in the database. In 11 records, two independent sets of annotations have been included, to allow inter-observer variability studies. The QT Database is available on a CD-ROM in the format previously used for the MIT-BIH Arrhythmia Database and the European ST-T Database, from which some of the recordings in the QT Database have been obtained.", "title": "" }, { "docid": "d5bc87dc8c93d2096f048437315e6634", "text": "The diversity of an ensemble can be calculated in a variety of ways. Here a diversity metric and a means for altering the diversity of an ensemble, called “thinning”, are introduced. 
We experiment with thinning algorithms evaluated on ensembles created by several techniques on 22 publicly available datasets. When compared to other methods, our percentage correct diversity measure algorithm shows a greater correlation between the increase in voted ensemble accuracy and the diversity value. Also, the analysis of different ensemble creation methods indicates each has varying levels of diversity. Finally, the methods proposed for thinning again show that ensembles can be made smaller without loss in accuracy. Information Fusion Journal", "title": "" }, { "docid": "b58c1e18a792974f57e9f676c1495826", "text": "The influence of bilingualism on cognitive test performance in older adults has received limited attention in the neuropsychology literature. The aim of this study was to examine the impact of bilingualism on verbal fluency and repetition tests in older Hispanic bilinguals. Eighty-two right-handed participants (28 men and 54 women) with a mean age of 61.76 years (SD = 9.30; range = 50-84) and a mean educational level of 14.8 years (SD = 3.6; range 2-23) were selected. Forty-five of the participants were English monolinguals, 18 were Spanish monolinguals, and 19 were Spanish-English bilinguals. Verbal fluency was tested by electing a verbal description of a picture and by asking participants to generate words within phonemic and semantic categories. Repetition was tested using a sentence-repetition test. The bilinguals' test scores were compared to English monolinguals' and Spanish monolinguals' test scores. Results demonstrated equal performance of bilingual and monolingual participants in all tests except that of semantic verbal fluency. Bilinguals who learned English before age 12 performed significantly better on the English repetition test and produced a higher number of words in the description of a picture than the bilinguals who learned English after age 12. Variables such as task demands, language interference, linguistic mode, and level of bilingualism are addressed in the Discussion section.", "title": "" } ]
scidocsrr
34a12d7126bb715fb4fe12e02adc969d
Child and adolescent mental disorders: the magnitude of the problem across the globe.
[ { "docid": "86aa313233bee3f040604ffa214af4bf", "text": "It is hypothesized that collective efficacy, defined as social cohesion among neighbors combined with their willingness to intervene on behalf of the common good, is linked to reduced violence. This hypothesis was tested on a 1995 survey of 8782 residents of 343 neighborhoods in Chicago, Illinois. Multilevel analyses showed that a measure of collective efficacy yields a high between-neighborhood reliability and is negatively associated with variations in violence, when individual-level characteristics, measurement error, and prior violence are controlled. Associations of concentrated disadvantage and residential instability with violence are largely mediated by collective efficacy.", "title": "" } ]
[ { "docid": "46639d24ffb635490f75c715236673c4", "text": "-We show that standard multilayer feedfbrward networks with as few as a single hidden layer and arbitrary bounded and nonconstant activation function are universal approximators with respect to LP(lt) performance criteria, for arbitrary finite input environment measures p, provided only that sufficiently many hidden units are available. If the activation function is continuous, bounded and nonconstant, then continuous mappings can be learned uniformly over compact input sets. We also give very general conditions ensuring that networks with sufficiently smooth activation functions are capable of arbitrarily accurate approximation to a_Function and its derivatives. Keywords--Multilayer feedforward networks, Activation function, Universal approximation capabilities, Input environment measure, D'(p) approximation, Uniform approximation, Sobolev spaces, Smooth approximation. 1. I N T R O D U C T I O N The approximation capabilities of neural network architectures have recently been investigated by many authors, including Carroll and Dickinson (1989), Cybenko (1989), Funahashi (1989), Gallant and White (1988), Hecht-Nielsen (1989), Hornik, Stinchcombe, and White (1989, 1990), lrie and Miyake (1988), Lapedes and Farber (1988), Stinchcombe and White (1989, 1990). (This list is by no means complete.) If we think of the network architecture as a rule for computing values at l output units given values at k input units, hence implementing a class of mappings from R k to R ~, we can ask how well arbitrary mappings from R k to R t can be approximated by the network, in particular, if as many hidden units as required for internal representation and computation may be employed. How to measure the accuracy of approximation depends on how we measure closeness between functions, which in turn varies significantly with the specific problem to be dealt with. In many applications, it is necessary to have the network perform simultaneously well on all input samples taken from some compact input set X in R k. In this case, closeness is Requests for reprints should be sent to Kurt Hornik, Institut fur Statistik und Wahrscheinlichkeitstheorie, Technische Universit/~t Wien, Wiedner Hauptstral3e 8-10/107, A-1040 Wien, Austria. measured by the uniform distance between functions on X, that is, P~,.x(f, g) = sup If(x) g(x) i. In other applications, we think of the inputs as random variables and are interested in the average performance where the average is taken with respect to the input environment measure/2, where p(R k) < ~. In this case, closeness is measured by the LP(p) distances Pp.~,(f, g) = [fRk [f(x) -g(x)l\" dl~(X)] 'p, 1 -< p < 0o, the most popular choice being p --2, corresponding to mean square error. Of course, there are many more ways of measuring closeness of functions. In particular, in many applications, it is also necessary that the derivatives of the approximating function implemented by the network closely resemble those of the function to be approximated, up to some order. This issue was first taken up in Hornik et al. (1990), who discuss the sources of need of smooth functional approximation in more detail. Typical examples arise in robotics (learning of smooth movements) and signal processing (analysis of chaotic time series); for a recent application to problems of nonparametric inference in statistics and econometrics, see Gallant and White (1989). 
All papers establishing certain approximation ca-", "title": "" }, { "docid": "4e0a3dd1401a00ddc9d0620de93f4ecc", "text": "The spatial-numerical association of response codes (SNARC) effect is the tendency for humans to respond faster to relatively larger numbers on the left or right (or with the left or right hand) and faster to relatively smaller numbers on the other side. This effect seems to occur due to a spatial representation of magnitude either in occurrence with a number line (wherein participants respond to relatively larger numbers faster on the right), other representations such as clock faces (responses are reversed from number lines), or culturally specific reading directions, begging the question as to whether the effect may be limited to humans. Given that a SNARC effect has emerged via a quantity judgement task in Western lowland gorillas and orangutans (Gazes et al., Cog 168:312–319, 2017), we examined patterns of response on a quantity discrimination task in American black bears, Western lowland gorillas, and humans for evidence of a SNARC effect. We found limited evidence for SNARC effect in American black bears and Western lowland gorillas. Furthermore, humans were inconsistent in direction and strength of effects, emphasizing the importance of standardizing methodology and analyses when comparing SNARC effects between species. These data reveal the importance of collecting data with humans in analogous procedures when testing nonhumans for effects assumed to bepresent in humans.", "title": "" }, { "docid": "71d81cbddd581a427db52ef811500072", "text": "BACKGROUND\nRelation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences.\n\n\nRESULTS\nOur system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. 
The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than APG kernel for the BioInfer dataset, in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures.\n\n\nCONCLUSIONS\nWe demonstrate a high performance Chemical Induced Disease relation extraction, without employing external knowledge sources or task specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences. We also show that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed the ASM kernel as effective for biomedical relation extraction, with comparable performance to the APG kernel for datasets such as the CID-sentence level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.", "title": "" }, { "docid": "22ee38911960fc78d893fe92a6e0a820", "text": "In a knowledge and information society, e-learning has built on the extensive use of advanced information and communication technologies to deliver learning and instruction. In addition, employees who need the training do not have to gather in a place at the same time, and thus it is not necessary for them to travel far away for attending training courses. Furthermore, the flexibility allows employees who perform different jobs or tasks for training courses according to their own scheduling. Since many studies have discussed learning and training of employees and most of them are focused on the learning emotion, learning style, educational content, and technology, there is limited research exploring the relationship between the e-learning and employee’s satisfaction. Therefore, this study aims to explore how to enhance employee’s satisfaction by means of e-learning systems, and what kinds of training or teaching activities are effective to increase their learning satisfaction. We provide a model and framework for assessing the impact of e-learning on employee’s satisfaction which improve learning and teaching outcomes. Findings from the study confirmed the validity of the proposed model for e-learning satisfaction assessment. In addition, the results showed that the four variables technology, educational content, motivation, and attitude significantly influenced employee’s learning satisfaction. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "413b21bece889166a385651ba5cd8512", "text": "Monaural speech separation is a fundamental problem in robust speech processing. Recently, deep neural network (DNN)-based speech separation methods, which predict either clean speech or an ideal time-frequency mask, have demonstrated remarkable performance improvement. However, a single DNN with a given window length does not leverage contextual information sufficiently, and the differences between the two optimization objectives are not well understood. In this paper, we propose a deep ensemble method, named multicontext networks, to address monaural speech separation. The first multicontext network averages the outputs of multiple DNNs whose inputs employ different window lengths. 
The second multicontext network is a stack of multiple DNNs. Each DNN in a module of the stack takes the concatenation of original acoustic features and expansion of the soft output of the lower module as its input, and predicts the ratio mask of the target speaker; the DNNs in the same module employ different contexts. We have conducted extensive experiments with three speech corpora. The results demonstrate the effectiveness of the proposed method. We have also compared the two optimization objectives systematically and found that predicting the ideal time-frequency mask is more efficient in utilizing clean training speech, while predicting clean speech is less sensitive to SNR variations.", "title": "" }, { "docid": "b8b3c300b0786c7cf5449945edb2e15c", "text": "This paper outlines problems that cellular network operators will face as energy-efficient housing becomes more popular. We report measurement results from houses made of modern construction materials that are required to achieve sufficient level of energy-efficiency, but that impact heavily also on radio signal propagation. Energy-efficiency is especially important in northern countries, where houses need to be properly isolated as heating generates a big share of the total energy consumption of households. However, the energy-efficiency trend will also reach rest of the Europe and other warmer countries as the tightening energy-efficiency requirements concern also cooling the houses. The measurement results indicate severe problems originating from radio signal attenuation as it increases up to 35 dB for individual construction materials for cellular frequencies around 2 GHz. From the perspective of actual building penetration losses in modern, energy-efficient houses, average attenuation values even up to 30 dB have been measured. Additional attenuation is very sensitive to buildings materials, but could jeopardize cellular coverage in the future.", "title": "" }, { "docid": "7c3ea2f5dc309058b95a6070fcb35266", "text": "The olive psyllid, Euphyllura phillyreae Foerster is one of the most destructive pests on buds and flowers of olive tree (Olea europaea L.) in May when the olive growers cannot apply any insecticides against the pest. Temperature-dependent development of the psyllid was studied at constant temperatures ranged 16–26°C. A degree-day (DD) model was developed to predict the larval emergence using the weekly cumulative larval counts and daily mean temperatures. Linear regression analysis estimated a lower developmental threshold of 4.1 and 4.3°C and a thermal constant of 164.17 and 466.13 DD for development of egg and larva, respectively. The cumulative larval counts of E. phillyreae approximated by probit transformation were plotted against time, expressed as the sum of DD above 4.3°C, the starting date when the olive tree phenology was the period of flower cluster initiation. A linear model was used to describe the relationship of DDs and probit values of larval emergence patterns of E. phillyreae and predicted that 10, 50 and 95% emergence of the larvae required 235.81, 360.22 and 519.93 DD, respectively, with errors of 1–3 days compared to observed values. Based on biofix depends the development of olive tree phenology; the DD model can be used as a forecasting method for proper timing of insecticide applications against E. 
phillyreae larvae in olive groves.", "title": "" }, { "docid": "46200c35a82b11d989c111e8398bd554", "text": "A physics-based compact gallium nitride power semiconductor device model is presented in this work, which is the first of its kind. The model derivation is based on the classical drift-diffusion model of carrier transport, which expresses the channel current as a function of device threshold voltage and externally applied electric fields. The model is implemented in the Saber® circuit simulator using the MAST hardware description language. The model allows the user to extract the parameters from the dc I-V and C-V characteristics that are also available in the device datasheets. A commercial 80 V EPC GaN HEMT is used to demonstrate the dynamic validation of the model against the transient device characteristics in a double-pulse test and a boost converter circuit configuration. The simulated versus measured device characteristics show good agreement and validate the model for power electronics design and applications using the next generation of GaN HEMT devices.", "title": "" }, { "docid": "1c8f1882be318f9d490adbfebd8388f7", "text": "OBJECTIVE\nThis study examined interventions for substance use disorders within the Department of Veterans Affairs (VA) psychiatric and primary care settings.\n\n\nMETHODS\nNational random samples of 83 VA psychiatry program directors and 102 primary care practitioners were surveyed by telephone. The survey assessed screening practices to detect substance use disorders, protocols for treating patients with substance use disorders, and available treatments for substance use disorders.\n\n\nRESULTS\nRespondents reported extensive contact with patients with substance use problems. However, a majority reported being ill equipped to treat substance use disorders themselves; they usually referred such patients to specialty substance use disorder treatment programs.\n\n\nCONCLUSIONS\nOffering fewer specialty substance use disorder services within the VA may be problematic: providers can refer patients to specialty programs only if such programs exist. Caring for veterans with substance use disorders may require increasing the capacity of and establishing new specialty programs or expanding the ability of psychiatric programs and primary care practitioners to provide such care.", "title": "" }, { "docid": "5fc6b0e151762560c8f09d0fe6983ca2", "text": "The increasing popularity of wearable devices that continuously capture video, and the prevalence of third-party applications that utilize these feeds have resulted in a new threat to privacy. In many situations, sensitive objects/regions are maliciously (or accidentally) captured in a video frame by third-party applications. However, current solutions do not allow users to specify and enforce fine grained access control over video feeds.\n In this paper, we describe MarkIt, a computer vision based privacy marker framework, that allows users to specify and enforce fine grained access control over video feeds. We present two example privacy marker systems -- PrivateEye and WaveOff. We conclude with a discussion of the computer vision, privacy and systems challenges in building a comprehensive system for fine grained access control over video feeds.", "title": "" }, { "docid": "73ddacb2ed1eaa8670c777959cda0260", "text": "The current study examined normative beliefs about aggression as a mediator between narcissistic exploitativeness and cyberbullying using two Asian adolescent samples from Singapore and Malaysia. 
Narcissistic exploitativeness was significantly and positively associated with cyberbullying and normative beliefs about aggression and normative beliefs about aggression were significantly and positively associated with cyberbullying. Normative beliefs about aggression were a significant partial mediator in both samples; these beliefs about aggression served as one possible mechanism of action by which narcissistic exploitativeness could exert its influence on cyberbullying. Findings extended previous empirical research by showing that such beliefs can be the mechanism of action not only in offline but also in online contexts and across cultures. Cyberbullying prevention and intervention efforts should include modification of norms and beliefs supportive of the legitimacy and acceptability of cyberbullying.", "title": "" }, { "docid": "08731e24a7ea5e8829b03d79ef801384", "text": "A new power-rail ESD clamp circuit designed with PMOS as main ESD clamp device has been proposed and verified in a 65nm 1.2V CMOS process. The new proposed design with adjustable holding voltage controlled by the ESD detection circuit has better immunity against mis-trigger or transient-induced latch-on event. The layout area and the standby leakage current of this new proposed design are much superior to that of traditional RC-based power-rail ESD clamp circuit with NMOS as main ESD clamp device.", "title": "" }, { "docid": "af43017e25de9eebc44cb20430c1d9d5", "text": "A coplanar waveguide (CPW) center-fed four-arm slot sinuous antenna is introduced in this letter. The antenna demonstrates broadband characteristics in terms of its split-beam radiation pattern and angle of maximum gain as well as multiband characteristics in terms of axial ratio (AR), omnidirectionality, polarization, and return loss. It is observed experimentally and computationally that regions of low AR with alternating polarization handedness, good omnidirectionality, low return loss, and high antenna gain/efficiency appear in narrow frequency bands. Measured and simulated results are presented to discuss the principles behind the antenna operation and venues for future performance optimization.", "title": "" }, { "docid": "1b347401820c826db444cc3580bde210", "text": "Utilization of Natural Fibers in Plastic Composites: Problems and Opportunities Roger M. Rowell, Anand R, Sanadi, Daniel F. Caulfield and Rodney E. Jacobson Forest Products Laboratory, ESDA, One Gifford Pinchot Drive, Madison, WI 53705 Department of Forestry, 1630 Linden Drive, University of Wisconsin, WI 53706 recycled. Results suggest that agro-based fibers are a viable alternative to inorganic/material based reinforcing fibers in commodity fiber-thermoplastic composite materials as long as the right processing conditions are used and for applications where higher water absorption may be so critical. These renewable fibers hav low densities and high specific properties and their non-abrasive nature permits a high volume of filling in the composite. Kenaf fivers, for example, have excellent specific properties and have potential to be outstanding reinforcing fillers in plastics. In our experiments, several types of natural fibers were blended with polyprolylene(PP) and then injection molded, with the fiber weight fractions varying to 60%. A compatibilizer or a coupling agent was used to improve the interaction and adhesion between the non-polar matrix and the polar lignocellulosic fibers. 
The specific tensile and flexural moduli of a 50% by weight (39% by volume) of kenaf-PP composites compares favorably with 40% by weight of glass fiber (19% by volume)-PP injection molded composites. Furthermore, prelimimary results sugget that natural fiber-PP composites can be regrounded and", "title": "" }, { "docid": "1436e4fddc73d33a6cf83abfa5c9eb02", "text": "The aim of our study was to provide a contribution to the research field of the critical success factors (CSFs) of ERP projects, with specific focus on smaller enterprises (SMEs). Therefore, we conducted a systematic literature review in order to update the existing reviews of CSFs. On the basis of that review, we led several interviews with ERP consultants experienced with ERP implementations in SMEs. As a result, we showed that all factors found in the literature also affected the success of ERP projects in SMEs. However, within those projects, technological factors gained much more importance compared to the factors that most influence the success of larger ERP projects. For SMEs, factors like the Organizational fit of the ERP system as well as ERP system tests were even more important than Top management support or Project management, which were the most important factors for large-scale companies.", "title": "" }, { "docid": "015db999535361f30ca469bcec85862a", "text": "This study addresses the factors that could affect the intention of physicians to adopt telemedicine technology. Based on the theoretical foundations of technology adoption models, a revised model is proposed and tested via a questionnaire with two groups of physicians that were, at the time of the survey, just about to use telemedicine technology. Group A is composed of physicians from a large urban healthcare provider institution involved in clinical, teaching, and research activities. An Intranet solution for teleradiology and teleconferencing based on ATM technology is being implemented. Physicians received no training regarding this technology. Group B is composed of physicians from rural areas who received training just before using the telemedicine network that links 43 sites in the same area. Results analyzed with PLS indicate that in both cases, physicians' perception of usefulness of telemedicine is positively related to their intention to adopt this technology. This is the only common result between the two groups. Physicians from group A have the intention to adopt telemedicine because its ease of use makes it sound useful. Physicians from group B perceive the ease of use of telemedecine as being associated with its usefulness, which is then related to their intention to adopt telemedicine. Specialists and researchers from group A indicated that their perceived effort and persistence is related to their perceived ease of use, which is linked with perceived usefulness. Both perceived ease of use and usefulness have an impact on their intention to adopt telemedecine, whereas the image they project by using it has no bearing on their intention to adopt telemedicine. Finally their perceived voluntariness of use has a negative and significant impact on their behavioral intention to adopt telemedicine. This result may be explained by the fact that these physicians, due to their heavy load of research and clinical practice, have less time to assess new technologies related to their work and they probably prefer that someone else do the legwork for them.", "title": "" }, { "docid": "2c832dea09e5fc622a5c1bbfdb53f8b2", "text": "A recent meta-analysis (S. 
Vazire & D. C. Funder, 2006) suggested that narcissism and impulsivity are related and that impulsivity partially accounts for the relation between narcissism and self-defeating behaviors (SDB). This research examines these hypotheses in two studies and tests a competing hypothesis that Extraversion and Agreeableness account for this relation. In Study 1, we examined the relations among narcissism, impulsivity, and aggression. Both narcissism and impulsivity predicted aggression, but impulsivity did not mediate the narcissism-aggression relation. In Study 2, narcissism was related to a measure of SDB and manifested divergent relations with a range of impulsivity traits from three measures. None of the impulsivity models accounted for the narcissism-SDB relation, although there were unique mediating paths for traits related to sensation and fun seeking. The domains of Extraversion and low Agreeableness successfully mediated the entire narcissism-SDB relation. We address the discrepancy between the current and meta-analytic findings.", "title": "" }, { "docid": "7e541819b7efa75750aa4244132bad85", "text": "The Second Edition of this now-classic text provides a current and thorough treatment of queueing systems, queueing networks, continuous and discrete-time Markov chains, and simulation. Thoroughly updated with new content, as well as new problems and worked examples, the text offers readers both the theory and practical guidance needed to conduct performance and reliability evaluations of computer, communication, and manufacturing systems.", "title": "" }, { "docid": "c85a26f1bccf3b28ca6a46c5312040e7", "text": "This paper describes a novel compact design of a planar circularly polarized (CP) tag antenna for use in a ultrahigh frequency (UHF) radio frequency identification (RFID) system. Introducing the meander strip into the right-arm of the square-ring structure enables the measured half-power bandwidth of the proposed CP tag antenna to exceed 100 MHz (860–960 MHz), which includes the entire operating bandwidth of the global UHF RFID system. A 3-dB axial-ratio bandwidth of approximately 36 MHz (902–938 MHz) can be obtained, which is suitable for American (902–928 MHz), European (918–926 MHz), and Taiwanese UHF RFID (922–928 MHz) applications. Since the overall antenna dimensions are only <inline-formula> <tex-math notation=\"LaTeX\">$54\\times54$ </tex-math></inline-formula> mm<sup>2</sup>, the proposed tag antenna can be operated with a size that is 64% smaller than that of the tag antennas attached on the safety glass. With a bidirectional reading pattern, the measured reading distance is about 8.3 m. Favorable tag sensitivity is obtained across the desired frequency band.", "title": "" }, { "docid": "cb1308814af219072bdcb66629149317", "text": "Automatic detection of persuasion is essential for machine interaction on the social web. To facilitate automated persuasion detection, we present a novel microtext corpus derived from hostage negotiation transcripts as well as a detailed manual (codebook) for persuasion annotation. Our corpus, called the NPS Persuasion Corpus, consists of 37 transcripts from four sets of hostage negotiation transcriptions. Each utterance in the corpus is hand annotated for one of nine categories of persuasion based on Cialdini’s model: reciprocity, commitment, consistency, liking, authority, social proof, scarcity, other, and not persuasive. 
Initial results using three supervised learning algorithms (Naı̈ve Bayes, Maximum Entropy, and Support Vector Machines) combined with gappy and orthogonal sparse bigram feature expansion techniques show that the annotation process did capture machine learnable features of persuasion with F-scores better than baseline.", "title": "" } ]
scidocsrr
920c1c79e4cd853ea3e2703c260b82da
A gradient descent rule for spiking neurons emitting multiple spikes
[ { "docid": "4bce473bb65dfc545d5895c7edb6cea6", "text": "mathematical framework of the population equations. It will turn out that the results are – of course – consistent with those derived from the population equation. We study a homogeneous network of N identical neurons which are mutually coupled with strength wij = J0/N where J0 > 0 is a positive constant. In other words, the (excitatory) interaction is scaled with one over N so that the total input to a neuron i is of order one even if the number of neurons is large (N →∞). Since we are interested in synchrony we suppose that all neurons have fired simultaneously at t̂ = 0. When will the neurons fire again? Since all neurons are identical we expect that the next firing time will also be synchronous. Let us calculate the period T between one synchronous pulse and the next. We start from the firing condition of SRM0 neurons θ = ui(t) = η(t− t̂i) + ∑", "title": "" }, { "docid": "d610f7d468fe2f28637f4aeb95948cd6", "text": "A computational model is described in which the sizes of variables are represented by the explicit times at which action potentials occur, rather than by the more usual 'firing rate' of neurons. The comparison of patterns over sets of analogue variables is done by a network using different delays for different information paths. This mode of computation explains how one scheme of neuroarchitecture can be used for very different sensory modalities and seemingly different computations. The oscillations and anatomy of the mammalian olfactory systems have a simple interpretation in terms of this representation, and relate to processing in the auditory system. Single-electrode recording would not detect such neural computing. Recognition 'units' in this style respond more like radial basis function units than elementary sigmoid units.", "title": "" } ]
[ { "docid": "946e5205a93f71e0cfadf58df186ef7e", "text": "Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2 normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.", "title": "" }, { "docid": "20d94a7857c8dfc975312e67ca3cbb4a", "text": "Condor is a distributed batch system for sharing the workload of compute-intensive jobs in a pool of Unix workstations connected by a network. In such a Condor pool, idle machines are spotted by Condor and allocated to queued jobs, thus putting otherwise unutilized capacity to e cient use. When institutions owning Condor pools cooperate, they may wish to exploit the joint capacity of their pools in a similar way. So the need arises to extend the Condor load-sharing and protection mechanisms beyond the boundaries of Condor pools, or in other words, to create a ock of Condors. Such a ock may include Condor pools connected by local-area networks as well as by wide-area networks. In this paper we describe the design and implementation of a distributed, layered Condor ocking mechanism. The main concept in this design is the Gateway Machine that represents in each pool idle machines from other pools in the ock and allows job transfers across pool boundaries. Our ocking design is transparent to the workstation owners, to the users, and to Condor itself. We also discuss our experiences with an intercontinental Condor ock.", "title": "" }, { "docid": "fca66085984bf1fe513080e70c3fafc2", "text": "In this letter, a two-layer metasurface is proposed to achieve radar cross-section (RCS) reduction of a stacked patch antenna at a broadband. The lower layer metasurface is composed of four square patches loaded with four resistors, which is utilized to reduce RCS in the operation band (2.75-3.4 GHz) of the patch antenna. The periodic square loops with four resistors mounted on each side are adopted to construct the upper layer metasurface for absorbing the incoming wave out of band. We first investigate the effectiveness of the proposed metasurface on the RCS reduction of the single stacked patch and then apply this strategy to the 1 ×4 stacked patch array. 
The proposed low RCS stacked patch array antenna is fabricated and measured. The experimental results show that the designed metasurface makes the antenna RCS dramatically reduced in a broadband covering the operation band and out-of-band from 5.5-16 GHz. Moreover, the introduction of metasurface is demonstrated to have little influence on the antenna performance.", "title": "" }, { "docid": "b8def7be21f014693589ae99385412dd", "text": "Automatic image captioning has received increasing attention in recent years. Although there are many English datasets developed for this problem, there is only one Turkish dataset and it is very small compared to its English counterparts. Creating a new dataset for image captioning is a very costly and time consuming task. This work is a first step towards transferring the available, large English datasets into Turkish. We translated English captioning datasets into Turkish by using an automated translation tool and we trained an image captioning model on the automatically obtained Turkish captions. Our experiments show that this model yields the best performance so far on Turkish captioning.", "title": "" }, { "docid": "c56d09b3c08f2cb9cc94ace3733b1c54", "text": "In this paper, we describe our microblog realtime filtering system developed and submitted for the Text Retrieval Conference (TREC 2015) microblog track. We submitted six runs for two tasks related to real-time filtering by using various Information Retrieval (IR), and Machine Learning (ML) techniques to analyze the Twitter sample live stream and match relevant tweets corresponding to specific user interest profiles. Evaluation results demonstrate the effectiveness of our approach as we achieved 3 of the top 7 best scores among automatic submissions across all participants and obtained the best (or close to best) scores in more than 25% of the evaluated topics for the real-time mobile push notification task.", "title": "" }, { "docid": "814e593fac017e5605c4992ef7b25d6d", "text": "This paper discusses the design of high power density transformer and inductor for the high frequency dual active bridge (DAB) GaN charger. Because the charger operates at 500 kHz, the inductance needed to achieve ZVS for the DAB converter is reduced to as low as 3μH. As a result, it is possible to utilize the leakage inductor as the series inductor of DAB converter. To create such amount of leakage inductance, certain space between primary and secondary winding is allocated to store the leakage flux energy. The designed transformer is above 99.2% efficiency while delivering 3.3kW. The power density of the designed transformer is 6.3 times of the lumped transformer and inductor in 50 kHz Si Charger. The detailed design procedure and loss analysis are discussed.", "title": "" }, { "docid": "1bc4409c0bad53d5286e9d3ea7017484", "text": "Due to the explosive growth in the size of scientific data sets, data-intensive computing is an emerging trend in computational science. Many application scientists are looking to integrate data-intensive computing into computational-intensive High Performance Computing facilities, particularly for data analytics. We have observed several scientific applications which must migrate their data from an HPC storage system to a data-intensive one. There is a gap between the data semantics of HPC storage and data-intensive system, hence, once migrated, the data must be further refined and reorganized. 
This reorganization requires at least two complete scans through the data set and then at least one MapReduce program to prepare the data before analyzing it. Running multiple MapReduce phases causes significant overhead for the application, in the form of excessive I/O operations. For every MapReduce application that must be run in order to complete the desired data analysis, a distributed read and write operation on the file system must be performed. Our contribution is to extend Map-Reduce to eliminate the multiple scans and also reduce the number of pre-processing MapReduce programs. We have added additional expressiveness to the MapReduce language to allow users to specify the logical semantics of their data such that 1) the data can be analyzed without running multiple data pre-processing MapReduce programs, and 2) the data can be simultaneously reorganized as it is migrated to the data-intensive file system. Using our augmented MapReduce system, MapReduce with Access Patterns (MRAP), we have demonstrated up to 33% throughput improvement in one real application, and up to 70% in an I/O kernel of another application.", "title": "" }, { "docid": "50fdc7454c5590cfc4bf151a3637a99c", "text": "Named Entity Recognition (NER) is the task of locating and classifying names in text. In previous work, NER was limited to a small number of predefined entity classes (e.g., people, locations, and organizations). However, NER on the Web is a far more challenging problem. Complex names (e.g., film or book titles) can be very difficult to pick out precisely from text. Further, the Web contains a wide variety of entity classes, which are not known in advance. Thus, hand-tagging examples of each entity class is impractical. This paper investigates a novel approach to the first step in Web NER: locating complex named entities in Web text. Our key observation is that named entities can be viewed as a species of multiword units, which can be detected by accumulating n-gram statistics over the Web corpus. We show that this statistical method’s F1 score is 50% higher than that of supervised techniques including Conditional Random Fields (CRFs) and Conditional Markov Models (CMMs) when applied to complex names. The method also outperforms CMMs and CRFs by 117% on entity classes absent from the training data. Finally, our method outperforms a semi-supervised CRF by 73%.", "title": "" }, { "docid": "6910ce37b65995c610e8534158f3a056", "text": "Conductor transposition, for cross-bonded cables, is recommended in the ANSI/IEEE Standard 575-1988 as a means to reduce interference with communication systems. In this paper, it is shown that reduced interference comes at the price of increased cable losses, which is an issue not addressed in the 1998 edition of the IEEE Standard 575. Analytical formulas are obtained for the calculation of the positive-sequence resistance for transposed and not transposed conductors to shed light on the reasons why the losses increase when the conductors are transposed. It has been found that for cross-bonded cables installed in flat formations, the positive-sequence resistance of transposed conductors is always larger than that of nontransposed conductors. Parametric studies are performed by varying all of the construction and installation parameters that affect the value of the positive-sequence resistance. 
In particular, we have changed the separation distance between cables, the insulation thickness, the number and resistance of the concentric wires, and the resistivity of the soil among other parameters. Examples on transmission and distribution cables are discussed.", "title": "" }, { "docid": "006245f5c4c3ccaaa7d5ac86591a954e", "text": "OBJECTIVE\nAs myofascial release therapy is currently under development, the objective of this study was to compare the effectiveness of myofascial release therapy with manual therapy for treating occupational mechanical neck pain.\n\n\nDESIGN\nA randomized, single-blind parallel group study was developed. The sample (n = 59) was divided into GI, treated with manual therapy, and GII, treated with myofascial release therapy. Variables studied were intensity of neck pain, cervical disability, quality of life, craniovertebral angle, and ranges of cervical motion.\n\n\nRESULTS\nAt five sessions, clinical significance was observed in both groups for all the variables studied, except for flexion in GI. At this time point, an intergroup statistical difference was observed, which showed that GII had better craniovertebral angle (P = 0.014), flexion (P = 0.021), extension (P = 0.003), right side bending (P = 0.001), and right rotation (P = 0.031). A comparative analysis between therapies after intervention showed statistical differences indicating that GII had better craniovertebral angle (P = 0.000), right (P = 0.000) and left (P = 0.009) side bending, right (P = 0.024) and left (P = 0.046) rotations, and quality of life.\n\n\nCONCLUSIONS\nThe treatment of occupational mechanical neck pain by myofascial release therapy seems to be more effective than manual therapy for correcting the advanced position of the head, recovering range of motion in side bending and rotation, and improving quality of life.", "title": "" }, { "docid": "70c6da9da15ad40b4f64386b890ccf51", "text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.", "title": "" }, { "docid": "e0fbfac63b894c46e3acda86adb67053", "text": "OBJECTIVE\nTo investigate the effectiveness of acupuncture compared with minimal acupuncture and with no acupuncture in patients with tension-type headache.\n\n\nDESIGN\nThree armed randomised controlled multicentre trial.\n\n\nSETTING\n28 outpatient centres in Germany.\n\n\nPARTICIPANTS\n270 patients (74% women, mean age 43 (SD 13) years) with episodic or chronic tension-type headache.\n\n\nINTERVENTIONS\nAcupuncture, minimal acupuncture (superficial needling at non-acupuncture points), or waiting list control. 
Acupuncture and minimal acupuncture were administered by specialised physicians and consisted of 12 sessions per patient over eight weeks.\n\n\nMAIN OUTCOME MEASURE\nDifference in numbers of days with headache between the four weeks before randomisation and weeks 9-12 after randomisation, as recorded by participants in headache diaries.\n\n\nRESULTS\nThe number of days with headache decreased by 7.2 (SD 6.5) days in the acupuncture group compared with 6.6 (SD 6.0) days in the minimal acupuncture group and 1.5 (SD 3.7) days in the waiting list group (difference: acupuncture v minimal acupuncture, 0.6 days, 95% confidence interval -1.5 to 2.6 days, P = 0.58; acupuncture v waiting list, 5.7 days, 3.9 to 7.5 days, P < 0.001). The proportion of responders (at least 50% reduction in days with headache) was 46% in the acupuncture group, 35% in the minimal acupuncture group, and 4% in the waiting list group.\n\n\nCONCLUSIONS\nThe acupuncture intervention investigated in this trial was more effective than no treatment but not significantly more effective than minimal acupuncture for the treatment of tension-type headache.\n\n\nTRIAL REGISTRATION NUMBER\nISRCTN9737659.", "title": "" }, { "docid": "20c3a77fd8b9c9ffc722193c4bca2f2a", "text": "The coating quality of a batch of lab-scale, sustained-release coated tablets was analysed by terahertz pulsed imaging (TPI). Terahertz radiation (2 to 120 cm(-1)) is particularly interesting for coating analysis as it has the capability to penetrate through most pharmaceutical excipients, and hence allows non-destructive coating analysis. Terahertz pulsed spectroscopy (TPS) was employed for the determination of the terahertz refractive indices (RI) on the respective sustained-release excipients used in this study. The whole surface of ten tablets with 10 mg/cm(2) coating was imaged using the fully-automated TPI imaga2000 system. Multidimensional coating thickness or signal intensity maps were reconstructed for the analysis of coating layer thickness, reproducibility, and uniformity. The results from the TPI measurements were validated with optical microscopy imaging and were found to be in good agreement with this destructive analytical technique. The coating thickness around the central band was generally 33% thinner than that on the tablet surfaces. Bimodal coating thickness distribution was detected in some tablets, with thicker coatings around the edges relative to the centre. Aspects of coating defects along with their site, depth and size were identified with virtual terahertz cross-sections. The inter-day precision of the TPI measurement was found to be within 0.5%.", "title": "" }, { "docid": "2dd695094324056b5d62b5b98da9568a", "text": "A case of trichilemmoma in continuity with a pigmented basal cell carcinoma is presented with dermatoscopy and dermatopathology. The distinction between the two lesions was evident dermatoscopically and was confirmed dermatopathologically. While trichilemmoma has been reported in association with basal cell carcinoma and dermatoscopy images of four previous cases of trichilemmoma have been published, no previous dermatoscopy image has been published of trichilemmoma associated with basal cell carcinoma.", "title": "" }, { "docid": "fe089ebac93a72c487389a3c732a56e2", "text": "Nowadays, most of information saved in companies are as unstructured models. Retrieval and extraction of the information is essential works and importance in semantic web areas. 
Many of these requirements will be depend on the storage efficiency and unstructured data analysis. Merrill Lynch recently estimated that more than 80% of all potentially useful business information is unstructured data. The large number and complexity of unstructured data opens up many new possibilities for the analyst. We analyze both structured and unstructured data individually and collectively. Text mining and natural language processing are two techniques with their methods for knowledge discovery form textual context in documents. In this study, text mining and natural language techniques will be illustrated. The aim of this work comparison and evaluation the similarities and differences between text mining and natural language processing for extraction useful information via suitable themselves methods.", "title": "" }, { "docid": "29fed3b1c65f8555da6ef49c7bc27634", "text": "In the context of fine-grained visual categorization, the ability to interpret models as human-understandable visual manuals is sometimes as important as achieving high classification accuracy. In this paper, we propose a novel Part-Stacked CNN architecture that explicitly explains the finegrained recognition process by modeling subtle differences from object parts. Based on manually-labeled strong part annotations, the proposed architecture consists of a fully convolutional network to locate multiple object parts and a two-stream classification network that encodes object-level and part-level cues simultaneously. By adopting a set of sharing strategies between the computation of multiple object parts, the proposed architecture is very efficient running at 20 frames/sec during inference. Experimental results on the CUB-200-2011 dataset reveal the effectiveness of the proposed architecture, from multiple perspectives of classification accuracy, model interpretability, and efficiency. Being able to provide interpretable recognition results in realtime, the proposed method is believed to be effective in practical applications.", "title": "" }, { "docid": "9cbb3369c6276e74c60d2f5c01aa9778", "text": "This paper presents some of the ground mobile robots under development at the Robotics and Mechanisms Laboratory (RoMeLa) at Virginia Tech that use biologically inspired novel locomotion strategies. By studying nature's models and then imitating or taking inspiration from these designs and processes, we apply and implement new ways for mobile robots to move. Unlike most ground mobile robots that use conventional means of locomotion such as wheels or tracks, these robots display unique mobility characteristics that make them suitable for certain environments where conventional ground robots have difficulty moving. These novel ground robots include; the whole skin locomotion robot inspired by amoeboid motility mechanisms, the three-legged walking machine STriDER (Self-excited Tripedal Dynamic Experimental Robot) that utilizes the concept of actuated passive-dynamic locomotion, the hexapod robot MARS (Multi Appendage Robotic System) that uses dry-adhesive “gecko feet” for walking in zero-gravity environments, the humanoid robot DARwIn (Dynamic Anthropomorphic Robot with Intelligence) that uses dynamic bipedal gaits, and the high mobility robot IMPASS (Intelligent Mobility Platform with Active Spoke System) that uses a novel wheel-leg hybrid locomotion strategy. 
Each robot and the novel locomotion strategies it uses are described, followed by a discussion of their capabilities and challenges.", "title": "" }, { "docid": "8f7368daec71ccb4b5c5a2daebda07be", "text": "This paper presents a novel inkjet-printed humidity sensor tag for passive radio-frequency identification (RFID) systems operating at ultrahigh frequencies (UHFs). During recent years, various humidity sensors have been developed by researchers around the world for HF and UHF RFID systems. However, to our best knowledge, the humidity sensor presented in this paper is one of the first passive UHF RFID humidity sensor tags fabricated using inkjet technology. This paper describes the structure and operation principle of the sensor tag as well as discusses the method of performing humidity measurements in practice. Furthermore, measurement results are presented, which include air humidity-sensitivity characterization and tag identification performance measurements.", "title": "" }, { "docid": "bd47b468b1754ddd9fecf8620eb0b037", "text": "Common bean (Phaseolus vulgaris) is grown throughout the world and comprises roughly 50% of the grain legumes consumed worldwide. Despite this, genetic resources for common beans have been lacking. Next generation sequencing, has facilitated our investigation of the gene expression profiles associated with biologically important traits in common bean. An increased understanding of gene expression in common bean will improve our understanding of gene expression patterns in other legume species. Combining recently developed genomic resources for Phaseolus vulgaris, including predicted gene calls, with RNA-Seq technology, we measured the gene expression patterns from 24 samples collected from seven tissues at developmentally important stages and from three nitrogen treatments. Gene expression patterns throughout the plant were analyzed to better understand changes due to nodulation, seed development, and nitrogen utilization. We have identified 11,010 genes differentially expressed with a fold change ≥ 2 and a P-value < 0.05 between different tissues at the same time point, 15,752 genes differentially expressed within a tissue due to changes in development, and 2,315 genes expressed only in a single tissue. These analyses identified 2,970 genes with expression patterns that appear to be directly dependent on the source of available nitrogen. Finally, we have assembled this data in a publicly available database, The Phaseolus vulgaris Gene Expression Atlas (Pv GEA), http://plantgrn.noble.org/PvGEA/ . Using the website, researchers can query gene expression profiles of their gene of interest, search for genes expressed in different tissues, or download the dataset in a tabular form. These data provide the basis for a gene expression atlas, which will facilitate functional genomic studies in common bean. Analysis of this dataset has identified genes important in regulating seed composition and has increased our understanding of nodulation and impact of the nitrogen source on assimilation and distribution throughout the plant.", "title": "" }, { "docid": "a5b1a00dab7c589cd9220986613eff98", "text": "This study seeks to investigate the impact of capital structure on firm performance by analyzing the relationship between operating performance of Malaysian firms, measured by return on asset (ROA) and return on equity (ROE) with short-term debt (STD), long-term debt (LTD) and total debt (TD). 
Four variables found by most literature to have an influence on firm operating performance, namely, size, asset grow, sales grow and efficiency, are used as control variables. This study covers two major sectors in Malaysian equity market which are the consumers and industrials sectors. 58 firms were identified as the sample firms and financial data from the year 2005 through 2010 are used as observations for this study, resulting in a total numbers of observations of 358. A series of regression analysis were executed for each model. Lag values for the proxies were also used to replace the non lag values in order to ensure that any extended effect of capital structure on firm performance is also examined. The study finds that only STD and TD have significant relationship with ROA while ROE has significant on each of debt level. However, the analysis with lagged values shows that non of lagged values for STD, TD and LTD has significant relationship with performance.", "title": "" } ]
scidocsrr
1ba1e5e11e49e4ca6388e10eed74ff1b
IREX I :: performance of iris recognition algorithms on standard images
[ { "docid": "978dd8a7f33df74d4a5cea149be6ebb0", "text": "A tutorial on the design and development of automatic speakerrecognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or toverify a person’s claimed identity. Speech processing and the basic components of automatic speakerrecognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9% correct identification. Last, the performances of various systems are compared.", "title": "" } ]
[ { "docid": "285fd0cdd988df78ac172640509b2cd3", "text": "Self-assembly in swarm robotics is essential for a group of robots in achieving a common goal that is not possible to achieve by a single robot. Self-assembly also provides several advantages to swarm robotics. Some of these include versatility, scalability, re-configurability, cost-effectiveness, extended reliability, and capability for emergent phenomena. This work investigates the effect of self-assembly in evolutionary swarm robotics. Because of the lack of research literature within this paradigm, there are few comparisons of the different implementations of self-assembly mechanisms. This paper reports the influence of connection port configuration on evolutionary self-assembling swarm robots. The port configuration consists of the number and the relative positioning of the connection ports on each of the robot. Experimental results suggest that configuration of the connection ports can significantly impact the emergence of selfassembly in evolutionary swarm robotics.", "title": "" }, { "docid": "f6e8eda4fa898a24f3a7d1116e49f42c", "text": "This is the eBook of the printed book and may not include any media, website access codes, or print supplements that may come packaged with the bound book. Search Engines: Information Retrieval in Practice is ideal for introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. It is also a valuable tool for search engine and information retrieval professionals. В Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice , is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines.В Coverage of the underlying IR and mathematical models reinforce key concepts. The bookвЂTMs numerous programming exercises make extensive use of Galago, a Java-based open source search engine.", "title": "" }, { "docid": "a854ee8cf82c4bd107e93ed0e70ee543", "text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.", "title": "" }, { "docid": "327042fae16e69b15a4e8ea857ccdb18", "text": "Do countries with lower policy-induced barriers to international trade grow faster, once other relevant country characteristics are controlled for? There exists a large empirical literature providing an affirmative answer to this question. 
We argue that methodological problems with the empirical strategies employed in this literature leave the results open to diverse interpretations. In many cases, the indicators of \"openness\" used by researchers are poor measures of trade barriers or are highly correlated with other sources of bad economic performance. In other cases, the methods used to ascertain the link between trade policy and growth have serious shortcomings. Papers that we review include Dollar (1992), Ben-David (1993), Sachs and Warner (1995), and Edwards (1998). We find little evidence that open trade policies--in the sense of lower tariff and non-tariff barriers to trade--are significantly associated with economic growth. Francisco Rodríguez and Dani Rodrik, Trade Policy and Economic Growth: A Skeptic's Guide to the Cross-National Evidence.", "title": "" }, { "docid": "8e117986ccaed290d5e567d1963ab3f7", "text": "Pedestrian detection from images is an important and yet challenging task. The conventional methods usually identify human figures using image features inside the local regions. In this paper we show that, besides the local features, context cues in the neighborhood provide important constraints that are not yet well utilized. We propose a framework to incorporate the context constraints for detection. First, we combine the local window with neighborhood windows to construct a multi-scale image context descriptor, designed to represent the contextual cues in spatial, scaling, and color spaces. Second, we develop an iterative classification algorithm called contextual boost. At each iteration, the classifier responses from the previous iteration across the neighborhood and multiple image scales, called classification context, are incorporated as additional features to learn a new classifier. The number of iterations is determined in the training process when the error rate converges. Since the classification context incorporates contextual cues from the neighborhood, through iterations it implicitly propagates to greater areas and thus provides more global constraints. We evaluate our method on the Caltech benchmark dataset [11]. The results confirm the advantages of the proposed framework. Compared with the state of the art, our method reduces the miss rate from 29% by [30] to 25% at 1 false positive per image (FPPI).", "title": "" }, { "docid": "d1ba8ad56a6227f771f9cef8139e9f15", "text": "We study sentiment analysis beyond the typical granularity of polarity and instead use Plutchik’s wheel of emotions model. We introduce RBEM-Emo as an extension to the Rule-Based Emission Model algorithm to deduce such emotions from human-written messages. We evaluate our approach on two different datasets and compare its performance with the current state-of-the-art techniques for emotion detection, including a recursive autoencoder. The results of the experimental study suggest that RBEM-Emo is a promising approach advancing the current state-of-the-art in emotion detection.", "title": "" }, { "docid": "4a911d76e556f48ffd2c1894ff82a8fc", "text": "Five-bar planar parallel robots for pick and place operations are always designed so that their singularity loci are significantly reduced. 
In these robots, the length of the proximal links is different from the length of the distal links. As a consequence, the workspace of the robot is significantly limited, since there are holes in it. In contrast, we propose a design in which all four links have equal lengths. Since such a design leads to more parallel singularities, a strategy for avoiding them by switching working modes is proposed. As a result, the usable workspace of the robot is significantly increased. The idea has been implemented on an industrial-grade prototype and the latter is described in detail.", "title": "" }, { "docid": "c84032da31c20d7561ee3f89a5074a5b", "text": "We develop a new type of statistical texture image feature, called a Local Radius Index (LRI), which can be used to quantify texture similarity based on human perception. Image similarity metrics based on LRI can be applied to image compression, identical texture retrieval and other related applications. LRI extracts texture features by using simple pixel value comparisons in space domain. Better performance can be achieved when LRI is combined with complementary texture features, e.g., Local Binary Patterns (LBP) and the proposed Subband Contrast Distribution. Compared with Structural Texture Similarity Metrics (STSIM), the LRI-based metrics achieve better retrieval performance with much less computation. Applied to the recently developed structurally lossless image coder, Matched Texture Coding, LRI enables similar performance while significantly accelerating the encoding.", "title": "" }, { "docid": "d51f2c1b31d1cfb8456190745ff294f7", "text": "This paper presents the design and measured performance of a novel intermediate-frequency variable-gain amplifier for Wideband Code-Division Multiple Access (WCDMA) transmitters. A compensation technique for parasitic coupling is proposed which allows a high dynamic range of 77 dB to be attained at 400 MHz while using a single variable-gain stage. Temperature compensation and decibel-linear characteristic are achieved by means of a control circuit which provides a lower than ±1.5 dB gain error over full temperature and gain ranges. The device is fabricated in a 0.8-μm 46 GHz fT silicon bipolar technology and drains up to 6 mA from a 2.7-V power supply.", "title": "" }, { "docid": "94848d407b2c4b709210c35d316eff9d", "text": "This paper presents a novel large-scale dataset and comprehensive baselines for end-to-end pedestrian detection and person recognition in raw video frames. Our baselines address three issues: the performance of various combinations of detectors and recognizers, mechanisms for pedestrian detection to help improve overall re-identification (re-ID) accuracy and assessing the effectiveness of different detectors for re-ID. We make three distinct contributions. First, a new dataset, PRW, is introduced to evaluate Person Re-identification in the Wild, using videos acquired through six synchronized cameras. It contains 932 identities and 11,816 frames in which pedestrians are annotated with their bounding box positions and identities. Extensive benchmarking results are presented on this dataset. Second, we show that pedestrian detection aids re-ID through two simple yet effective improvements: a cascaded fine-tuning strategy that trains a detection model first and then the classification model, and a Confidence Weighted Similarity (CWS) metric that incorporates detection scores into similarity measurement. 
Third, we derive insights in evaluating detector performance for the particular scenario of accurate person re-ID.", "title": "" }, { "docid": "4be087f37232aefa30da1da34a5e9ff5", "text": "Many clinical studies have shown that electroencephalograms (EEG) of Alzheimer patients (AD) often have an abnormal power spectrum. In this paper a frequency band analysis of AD EEG signals is presented, with the aim of improving the diagnosis of AD from EEG signals. Relative power in different EEG frequency bands is used as features to distinguish between AD patients and healthy control subjects. Many different frequency bands between 4 and 30Hz are systematically tested, besides the traditional frequency bands, e.g., theta band (4–8Hz). The discriminative power of the resulting spectral features is assessed through statistical tests (Mann-Whitney U test). Moreover, linear discriminant analysis is conducted with those spectral features. The optimized frequency ranges (4–7Hz, 8–15Hz, 19–24Hz) yield substantially better classification performance than the traditional frequency bands (4–8Hz, 8–12Hz, 12–30Hz); the frequency band 4–7Hz is the optimal frequency range for detecting AD, which is similar to the classical theta band. The frequency bands were also optimized as features through leave-one-out crossvalidation, resulting in error-free classification. The optimized frequency bands may improve existing EEG based diagnostic tools for AD. Additional testing on larger AD datasets is required to verify the effectiveness of the proposed approach.", "title": "" }, { "docid": "939cd6055f850b8fdb6ba869d375cf25", "text": "...although PPP lessons are often supplemented with skills lessons, most students taught mainly through conventional approaches such as PPP leave school unable to communicate effectively in English (Stern, 1983). This situation has prompted many ELT professionals to take note of... second language acquisition (SLA) studies... and turn towards holistic approaches where meaning is central and where opportunities for language use abound. Task-based learning is one such approach...", "title": "" }, { "docid": "daf26ad60817a0d54a7a56895c2895f1", "text": "A technique called optical coherence tomography (OCT) has been developed for noninvasive cross-sectional imaging in biological systems. OCT uses low-coherence interferometry to produce a two-dimensional image of optical scattering from internal tissue microstructures in a way that is analogous to ultrasonic pulse-echo imaging. OCT has longitudinal and lateral spatial resolutions of a few micrometers and can detect reflected signals as small as approximately 10(-10) of the incident optical power. Tomographic imaging is demonstrated in vitro in the peripapillary area of the retina and in the coronary artery, two clinically relevant examples that are representative of transparent and turbid media, respectively.", "title": "" }, { "docid": "1a62ecc611c2e59659a1d06bbcaae30c", "text": "In this paper, we develop a remotely controlled robotic arm with 4 degree of freedom (D.O.F) that is wirelessly controlled using four control mechanisms, i.e, Voice Control, Smart Phone-Tilt Control, Remote control and Hand Gesture Control. Wireless technologies such as Bluetooth and Wi-Fi are used to access the Quad-Controlled Robotic Arm (QCRA). A prototype QCRA is developed. The QCRA can be used to pick and place objects from one place to another on receiving the commands from distance, thereby reducing the human effort. 
An Android application is developed for the convenience of the user in operating this QCRA using different control mechanisms. Performance evaluation results are encouraging. Potential applications of the QCRA in homes, industries and for physically challenged/aged people are also discussed.", "title": "" }, { "docid": "1b5c1cbe3f53c1f3a50557ff3144887e", "text": "The emergence of antibiotic resistant Staphylococcus aureus presents a worldwide problem that requires non-antibiotic strategies. This study investigated the anti-biofilm and anti-hemolytic activities of four red wines and two white wines against three S. aureus strains. All red wines at 0.5-2% significantly inhibited S. aureus biofilm formation and hemolysis by S. aureus, whereas the two white wines had no effect. Furthermore, at these concentrations, red wines did not affect bacterial growth. Analyses of hemolysis and active component identification in red wines revealed that the anti-biofilm compounds and anti-hemolytic compounds largely responsible were tannic acid, trans-resveratrol, and several flavonoids. In addition, red wines attenuated S. aureus virulence in vivo in the nematode Caenorhabditis elegans, which is killed by S. aureus. These findings show that red wines and their compounds warrant further attention in antivirulence strategies against persistent S. aureus infection.", "title": "" }, { "docid": "8fa34eb8d0ab6b1248a98936ddad7c5c", "text": "Planning with temporally extended goals and uncontrollable events has recently been introduced as a formal model for system reconfiguration problems. An important application is to automatically reconfigure a real-life system in such a way that its subsequent internal evolution is consistent with a temporal goal formula. In this paper we introduce an incremental search algorithm and a search-guidance heuristic, two generic planning enhancements. An initial problem is decomposed into a series of subproblems, providing two main ways of speeding up a search. Firstly, a subproblem focuses on a part of the initial goal. Secondly, a notion of action relevance makes it possible to explore with higher priority those actions that are heuristically considered to be more relevant to the subproblem at hand. Even though our techniques are more generally applicable, we restrict our attention to planning with temporally extended goals and uncontrollable events. Our ideas are implemented on top of a successful previous system that performs online learning to better guide planning and to safely avoid potentially expensive searches. In experiments, the system speed performance is further improved by a convincing margin.", "title": "" }, { "docid": "c994489b3a3d2011042903094d70480a", "text": "In this paper, a novel design of a broadband coaxial to substrate integrated waveguide (SIW) transition has been implemented. The difficulties in achieving impedance matching between the SIW and the excitation port, especially for thin substrates, have been investigated, and design steps to overcome the problem have been discussed. A two-port SIW section using a back-to-back configuration of the proposed transition is designed to operate at X-band (8-12 GHz) and fabricated. The measured result shows a broad bandwidth (30%) and an insertion loss of 1.2 dB in the operating band.", "title": "" }, { "docid": "3af28edbed06ef6db9fdb27a73e784de", "text": "The study aimed to investigate factors influencing older adults' physical activity engagement over time. 
The authors analyzed 3 waves of data from a sample of Israelis age 75-94 (Wave 1 n = 1,369, Wave 2 n = 687, Wave 3 n = 154). Findings indicated that physical activity engagement declined longitudinally. Logistic regressions showed that female gender, older age, and taking more medications were significant risk factors for stopping exercise at Wave 2 in those physically active at Wave 1. In addition, higher functional and cognitive status predicted initiating exercise at Wave 2 in those who did not exercise at Wave 1. By clarifying the influence of personal characteristics on physical activity engagement in the Israeli old-old, this study sets the stage for future investigation and intervention, stressing the importance of targeting at-risk populations, accommodating risk factors, and addressing both the initiation and the maintenance of exercise in the face of barriers.", "title": "" }, { "docid": "bf38cd8e1d9abd893979c4b1887519ba", "text": "In this work, an efficient automated new approach for sleep stage identification based on the new standard of the American academy of sleep medicine (AASM) is presented. The propose approach employs time-frequency analysis and entropy measures for feature extraction from a single electroencephalograph (EEG) channel. Three time-frequency techniques were deployed for the analysis of the EEG signal: Choi-Williams distribution (CWD), continuous wavelet transform (CWT), and Hilbert-Huang Transform (HHT). Polysomnographic recordings from sixteen subjects were used in this study and features were extracted from the time-frequency representation of the EEG signal using Renyi's entropy. The classification of the extracted features was done using random forest classifier. The performance of the new approach was tested by evaluating the accuracy and the kappa coefficient for the three time-frequency distributions: CWD, CWT, and HHT. The CWT time-frequency distribution outperformed the other two distributions and showed excellent performance with an accuracy of 0.83 and a kappa coefficient of 0.76.", "title": "" }, { "docid": "dc269ab8dccad6c8be533508d1b73de2", "text": "In this paper the main problems and the available solutions are addressed for the generation of 3D models from terrestrial images. Close range photogrammetry has dealt for many years with manual or automatic image measurements for precise 3D modelling. Nowadays 3D scanners are also becoming a standard source for input data in many application areas, but image-based modelling still remains the most complete, economical, portable, flexible and widely used approach. In this paper the full pipeline is presented for 3D modelling from terrestrial image data, considering the different approaches and analysing all the steps involved.", "title": "" } ]
scidocsrr
8250255eaa5ea3994f34eba9fd215263
Neural Compositional Denotational Semantics for Question Answering
[ { "docid": "74e40c5cb4e980149906495da850d376", "text": "Universal schema predicts the types of entities and relations in a knowledge base (KB) by jointly embedding the union of all available schema types—not only types from multiple structured databases (such as Freebase or Wikipedia infoboxes), but also types expressed as textual patterns from raw text. This prediction is typically modeled as a matrix completion problem, with one type per column, and either one or two entities per row (in the case of entity types or binary relation types, respectively). Factorizing this sparsely observed matrix yields a learned vector embedding for each row and each column. In this paper we explore the problem of making predictions for entities or entity-pairs unseen at training time (and hence without a pre-learned row embedding). We propose an approach having no per-row parameters at all; rather we produce a row vector on the fly using a learned aggregation function of the vectors of the observed columns for that row. We experiment with various aggregation functions, including neural network attention models. Our approach can be understood as a natural language database, in that questions about KB entities are answered by attending to textual or database evidence. In experiments predicting both relations and entity types, we demonstrate that despite having an order of magnitude fewer parameters than traditional universal schema, we can match the accuracy of the traditional model, and more importantly, we can now make predictions about unseen rows with nearly the same accuracy as rows available at training time.", "title": "" }, { "docid": "b68d92cd03d77ee383b8be50a00716f1", "text": "This paper introduces Logical Semantics with Perception (LSP), a model for grounded language acquisition that learns to map natural language statements to their referents in a physical environment. For example, given an image, LSP can map the statement “blue mug on the table” to the set of image segments showing blue mugs on tables. LSP learns physical representations for both categorical (“blue,” “mug”) and relational (“on”) language, and also learns to compose these representations to produce the referents of entire statements. We further introduce a weakly supervised training procedure that estimates LSP’s parameters using annotated referents for entire statements, without annotated referents for individual words or the parse structure of the statement. We perform experiments on two applications: scene understanding and geographical question answering. We find that LSP outperforms existing, less expressive models that cannot represent relational language. We further find that weakly supervised training is competitive with fully supervised training while requiring significantly less annotation effort.", "title": "" }, { "docid": "59c24fb5b9ac9a74b3f89f74b332a27c", "text": "This paper addresses the problem of learning to map sentences to logical form, given training data consisting of natural language sentences paired with logical representations of their meaning. Previous approaches have been designed for particular natural languages or specific meaning representations; here we present a more general method. The approach induces a probabilistic CCG grammar that represents the meaning of individual words and defines how these meanings can be combined to analyze complete sentences. 
We use higher-order unification to define a hypothesis space containing all grammars consistent with the training data, and develop an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a log-linear parsing model. Experiments demonstrate high accuracy on benchmark data sets in four languages with two different meaning representations.", "title": "" }, { "docid": "da69ac86355c5c514f7e86a48320dcb3", "text": "Current approaches to semantic parsing, the task of converting text to a formal meaning representation, rely on annotated training data mapping sentences to logical forms. Providing this supervision is a major bottleneck in scaling semantic parsers. This paper presents a new learning paradigm aimed at alleviating the supervision burden. We develop two novel learning algorithms capable of predicting complex structures which only rely on a binary feedback signal based on the context of an external world. In addition we reformulate the semantic parsing problem to reduce the dependency of the model on syntactic patterns, thus allowing our parser to scale better using less supervision. Our results surprisingly show that without using any annotated meaning representations learning with a weak feedback signal is capable of producing a parser that is competitive with fully supervised parsers.", "title": "" }, { "docid": "4fbc692a4291a92c6fa77dc78913e587", "text": "Achieving artificial visual reasoning — the ability to answer image-related questions which require a multi-step, high-level process — is an important step towards artificial general intelligence. This multi-modal task requires learning a questiondependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose, Conditional Batch Normalization approach achieves state-ofthe-art results on the CLEVR Visual Reasoning benchmark with a 2.4% error rate. We outperform the next best end-to-end method (4.5%) and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.", "title": "" } ]
[ { "docid": "564e3f6b8deb91ab6ba096ee2b8bd0a3", "text": "A hybrid model for social media popularity prediction is proposed by combining Convolutional Neural Network (CNN) with XGBoost. The CNN model is exploited to learn high-level representations from the social cues of the data. These high-level representations are used in XGBoost to predict the popularity of the social posts. We evaluate our approach on a real-world Social Media Prediction (SMP) dataset, which consists of 432K Flickr images. The experimental results show that the proposed approach is effective, achieving the following performance: Spearman's Rho: 0.7406, MSE: 2.7293, MAE: 1.2475.", "title": "" }, { "docid": "b07cff84fd585f9ee88865e7c51171f5", "text": "Convolutional neural networks (CNN) are extensions to deep neural networks (DNN) which are used as alternate acoustic models with state-of-the-art performances for speech recognition. In this paper, CNNs are used as acoustic models for speech activity detection (SAD) on data collected over noisy radio communication channels. When these SAD models are tested on audio recorded from radio channels not seen during training, there is severe performance degradation. We attribute this degradation to mismatches between the two dimensional filters learnt in the initial CNN layers and the novel channel data. Using a small amount of supervised data from the novel channels, the filters can be adapted to provide significant improvements in SAD performance. In mismatched acoustic conditions, the adapted models provide significant improvements (about 10-25%) relative to conventional DNN-based SAD systems. These results illustrate that CNNs have a considerable advantage in fast adaptation for acoustic modeling in these settings.", "title": "" }, { "docid": "bff32fd3cc56ccf1700c9a9fd8804973", "text": "We propose a new method to compute prediction intervals. Especially for small data sets the width of a prediction interval does not only depend on the variance of the target distribution, but also on the accuracy of our estimator of the mean of the target, i.e., on the width of the confidence interval. The confidence interval follows from the variation in an ensemble of neural networks, each of them trained and stopped on bootstrap replicates of the original data set. A second improvement is the use of the residuals on validation patterns instead of on training patterns for estimation of the variance of the target distribution. As illustrated on a synthetic example, our method is better than existing methods with regard to extrapolation and interpolation in data regimes with a limited amount of data, and yields prediction intervals which actual confidence levels are closer to the desired confidence levels. 1 STATISTICAL INTERVALS In this paper we will consider feedforward neural networks for regression tasks: estimating an underlying mathematical function between input and output variables based on a finite number of data points possibly corrupted by noise. We are given a set of Pdata pairs {ifJ, t fJ } which are assumed to be generated according to t(i) = f(i) + e(i) , (1) where e(i) denotes noise with zero mean. Straightforwardly trained on such a regression task, the output of a network o(i) given a new input vector i can be RWCP: Real World Computing Partnership; SNN: Foundation for Neural Networks. Practical Confidence and Prediction Intervals 177 interpreted as an estimate of the regression f(i) , i.e ., of the mean of the target distribution given input i. 
Sometimes this is all we are interested in: a reliable estimate of the regression f(i). In many applications, however, it is important to quantify the accuracy of our statements. For regression problems we can distinguish two different aspects: the accuracy of our estimate of the true regression and the accuracy of our estimate with respect to the observed output. Confidence intervals deal with the first aspect, i.e., consider the distribution of the quantity f(i) - o(i), prediction intervals with the latter, i.e., treat the quantity t(i) - o(i). We see from t(i) - o(i) = [f(i) - o(i)] + e(i), (2) that a prediction interval necessarily encloses the corresponding confidence interval. In [7] a method somewhat similar to ours is introduced to estimate both the mean and the variance of the target probability distribution. It is based on the assumption that there is a sufficiently large data set, i.e., that there is no risk of overfitting and that the neural network finds the correct regression. In practical applications with limited data sets such assumptions are too strict. In this paper we will propose a new method which estimates the inaccuracy of the estimator through bootstrap resampling and corrects for the tendency to overfit by considering the residuals on validation patterns rather than those on training patterns. 2 BOOTSTRAPPING AND EARLY STOPPING Bootstrapping [3] is based on the idea that the available data set is nothing but a particular realization of some unknown probability distribution. Instead of sampling over the \"true\" probability distribution, which is obviously impossible, one defines an empirical distribution. With so-called naive bootstrapping the empirical distribution is a sum of delta peaks on the available data points, each with probability content 1/Pdata. A bootstrap sample is a collection of Pdata patterns drawn with replacement from this empirical probability distribution. This bootstrap sample is nothing but our training set and all patterns that do not occur in the training set are by definition part of the validation set. For large Pdata, the probability that a pattern becomes part of the validation set is (1 - 1/Pdata)^Pdata ≈ 1/e ≈ 0.37. When training a neural network on a particular bootstrap sample, the weights are adjusted in order to minimize the error on the training data. Training is stopped when the error on the validation data starts to increase. This so-called early stopping procedure is a popular strategy to prevent overfitting in neural networks and can be viewed as an alternative to regularization techniques such as weight decay. In this context bootstrapping is just a procedure to generate subdivisions into training and validation sets similar to k-fold cross-validation or subsampling. On each of the nrun bootstrap replicates we train and stop a single neural network. The output of network i on input vector i^μ is written o_i(i^μ) ≡ o_i^μ. As \"the\" estimate of our ensemble of networks for the regression f(i) we take the average output m(i) ≡ (1/nrun) Σ_{i=1}^{nrun} o_i(i). This is a so-called \"bagged\" estimator [2]. In [5] it is shown that a proper balancing of the network outputs can yield even better results.", "title": "" }, { "docid": "bc2dee76b561bffeead80e74d5b8a388", "text": "BACKGROUND AND PURPOSE\nCarotid artery stenosis causes up to 10% of all ischemic strokes. Carotid endarterectomy (CEA) was introduced as a treatment to prevent stroke in the early 1950s. 
Carotid stenting (CAS) was introduced as a treatment to prevent stroke in 1994.\n\n\nMETHODS\nThe Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) is a randomized trial with blinded end point adjudication. Symptomatic and asymptomatic patients were randomized to CAS or CEA. The primary end point was the composite of any stroke, myocardial infarction, or death during the periprocedural period and ipsilateral stroke thereafter, up to 4 years.\n\n\nRESULTS\nThere was no significant difference in the rates of the primary end point between CAS and CEA (7.2% versus 6.8%; hazard ratio, 1.11; 95% CI, 0.81 to 1.51; P=0.51). Symptomatic status and sex did not modify the treatment effect, but an interaction with age and treatment was detected (P=0.02). Outcomes were slightly better after CAS for patients aged <70 years and better after CEA for patients aged >70 years. The periprocedural end point did not differ for CAS and CEA, but there were differences in the components, CAS versus CEA (stroke 4.1% versus 2.3%, P=0.012; and myocardial infarction 1.1% versus 2.3%, P=0.032).\n\n\nCONCLUSIONS\nIn CREST, CAS and CEA had similar short- and longer-term outcomes. During the periprocedural period, there was higher risk of stroke with CAS and higher risk of myocardial infarction with CEA. Clinical Trial Registration-www.clinicaltrials.gov. Unique identifier: NCT00004732.", "title": "" }, { "docid": "38b93f50d4fc5a1029ebedb5a544987a", "text": "We present a novel graph-based framework for timeline summarization, the task of creating different summaries for different timestamps but for the same topic. Our work extends timeline summarization to a multimodal setting and creates timelines that are both textual and visual. Our approach exploits the fact that news documents are often accompanied by pictures and the two share some common content. Our model optimizes local summary creation and global timeline generation jointly following an iterative approach based on mutual reinforcement and co-ranking. In our algorithm, individual summaries are generated by taking into account the mutual dependencies between sentences and images, and are iteratively refined by considering how they contribute to the global timeline and its coherence. Experiments on real-world datasets show that the timelines produced by our model outperform several competitive baselines both in terms of ROUGE and when assessed by human evaluators.", "title": "" }, { "docid": "f6c3124f3824bcc836db7eae1b926d65", "text": "Cloud balancing provides an organization with the ability to distribute application requests across any number of application deployments located in different data centers and through Cloud-computing providers. In this paper, we propose a load balancing method Minsd (Minimize standard deviation of Cloud load method) and apply it at three control levels: PEs (Processing Elements), Hosts and Data Centers. Simulations on CloudSim are used to check its performance and its influence on makespan, communication overhead and throughput. A true log of a cluster is also used to test our method. 
Results indicate that our method not only gives good Cloud balancing but also reduces makespan and communication overhead and enhances throughput of the whole system.", "title": "" }, { "docid": "b06653abc5e287c72fc68247610ef76a", "text": "Radio Frequency Identification (RFID) is the name given to a technology that uses tags, readers and backend servers to form a system that has numerous applications in many areas, many already discovered and the rest still to be explored. Before implementing an RFID system, security issues must be considered carefully; not taking care of security issues could lead to severe consequences. This paper is an overview covering an introduction to RFID, RFID fundamentals, the basic structure of an RFID system, some of its numerous applications, security issues and their remedies.", "title": "" }, { "docid": "93c84b6abfe30ff7355e4efc310b440b", "text": "Parallel file systems (PFS) are widely-used in modern computing systems to mask the ever-increasing performance gap between computing and data access. PFSs favor large requests, and do not work well for small requests, especially small random requests. Newer Solid State Drives (SSD) have excellent performance on small random data accesses, but also incur a high monetary cost. In this study, we propose a hybrid architecture named the Smart Selective SSD Cache (S4D-Cache), which employs a small set of SSD-based file servers as a selective cache of conventional HDD-based file servers. A novel scheme is introduced to identify performance-critical data, and conduct selective cache admission to fully utilize the hybrid architecture in terms of data-access parallelism and randomness. We have implemented an S4D-Cache under the MPI-IO and PVFS2 parallel file system. Our experiments show that S4D-Cache can significantly improve I/O throughput, and is a promising approach for parallel applications.", "title": "" }, { "docid": "a56efa3471bb9e3091fffc6b1585f689", "text": "Rogowski current transducers combine a high bandwidth, an easy to use thin flexible coil, and low insertion impedance making them an ideal device for measuring pulsed currents in power electronic applications. Practical verification of a Rogowski transducer's ability to measure current transients due to the fastest MOSFET and IGBT switching requires a calibrated test facility capable of generating a pulse with a rise time of the order of a few 10's of ns. A flexible 8-module system has been built which gives a 2000A peak current with a rise time of 40ns. The modular approach enables verification for a range of transducer coil sizes and ratings.", "title": "" }, { "docid": "e00ba988f473d4729f2e593171e15185", "text": "To achieve a more effective solution for large-scale image classification (i.e., classifying millions of images into thousands or even tens of thousands of object classes or categories), a deep multi-task learning algorithm is developed by seamlessly integrating deep CNNs with multi-task learning over the concept ontology, where the concept ontology is used to organize large numbers of object classes or categories hierarchically and determine the inter-related learning tasks automatically. Our deep multi-task learning algorithm can integrate the deep CNNs to learn more discriminative high-level features for image representation, and it can also leverage multi-task learning and inter-level relationship constraint to train a more discriminative tree classifier over the concept ontology and control the inter-level error propagation effectively. 
In our deep multi-task learning algorithm, we can use back propagation to simultaneously refine both the relevant node classifiers (at different levels of the concept ontology) and the deep CNNs according to a joint objective function. The experimental results have demonstrated that our deep multi-task learning algorithm can achieve very competitive results on both the accuracy and the cost of feature extraction for large-scale image classification.", "title": "" }, { "docid": "df70d4849e3fef9348c733c47400dee8", "text": "Three studies investigated the relationships among employees' perception of supervisor support (PSS), perceived organizational support (POS), and employee turnover. Study 1 found, with 314 employees drawn from a variety of organizations, that PSS was positively related to temporal change in POS, suggesting that PSS leads to POS. Study 2 established, with 300 retail sales employees, that the PSS-POS relationship increased with perceived supervisor status in the organization. Study 3 found, with 493 retail sales employees, evidence consistent with the view that POS completely mediated a negative relationship between PSS and employee turnover. These studies suggest that supervisors, to the extent that they are identified with the organization, contribute to POS and, ultimately, to job retention.", "title": "" }, { "docid": "ca506dc4fe8e97d93e2602f08248269c", "text": "The user community has been an important external source of a firm’s product or service innovation. Users’ innovation-conducive knowledge sharing enables the community to work as a vital source of innovation. But, traditional economic theories of innovation seem to provide few explanations about why such knowledge sharing takes place for free in the user community. Therefore, this study investigates what drives community users to freely share their innovation-conducive knowledge, using the theory of planned behavior. Based on an empirical analysis of the data from 1244 members of a South Korean online game user community, it reveals that intrinsic motivation, shared goals, and social trust are salient factors in promoting users’ innovation-conducive knowledge sharing. Extrinsic motivation and social tie, however, were found to affect such sharing adversely, contingent upon whether a user is an innovator or a non-innovator. The study illustrates how social capital, in addition to individual motivations, forms and influences users’ innovation-conducive knowledge sharing in the online gaming context.", "title": "" }, { "docid": "bee4d4ba947d87b86abc02852c39d2b3", "text": "Aim\nThe study assessed the documentation of nursing care before, during and after the Standardized Nursing Language Continuing Education Programme (SNLCEP). It evaluates the differences in documentation of nursing care in different nursing specialty areas and assessed the influence of work experience on the quality of documentation of nursing care with a view to provide information on documentation of nursing care. The instrument used was an adapted scoring guide for nursing diagnosis, nursing intervention and nursing outcome (Q-DIO).\n\n\nDesign\nRetrospective record reviews design was used.\n\n\nMethods\nA total of 270 nursing process booklets formed the sample size. 
From each ward, 90 booklets were selected in this order: 30 booklets before the SNLCEP, 30 booklets during SNLCEP and 30 booklets after SNLCEP.\n\n\nResults\nOverall, the study concluded that the SNLCEP had a significant effect on the quality of documentation of nursing care using Standardized Nursing Languages.", "title": "" }, { "docid": "36bc32033cbecf8ee00c5ec84ef26cfa", "text": "Most device technology has been moving towards the complex production of nano-ICs, with demands for lower cost, smaller size and better thermal and electrical performance. One of the marketable packages is the Quad Flat No-Lead (QFN) package. Due to the high demand for miniaturization of electronic products, QFN development becomes more promising, for example lead frame designs with half edge, cheaper tape, and shrinkage of the package size to achieve more units per lead frame (cost saving) [1]. Improvements in the lead frame design, such as the lead frame metal tie bar and half edge features, are always the main challenges for the QFN package. Reducing the size of the metal tie bar speeds up the package singulation process, whereas the half edge is designed for mold compound locking to reduce delamination. This paper specifically discusses how the critical wire bonding parameters, capillary design and environmental conditions interact with each other to result in unstable leads (second bond failures). During the initial evaluation of the new package SOT1261 with a rough PPF lead frame, several short tails and fish tails were observed on the wedge bond when the current parameter setting, which had been qualified in other packages with the same wire size (18um Au wire), was applied. These problems did not surface in earlier qualified devices, mainly due to the second bond parameter robustness, capillary designs, lead frame design changes, different die packages, lead frame batches and contamination levels. One of the main root causes studied is the second bond parameter setting, which is not robust enough for the flimsy lead frame. The new bonding methodology applies the concept of a low base ultrasonic and high force setting together with a scrubbing mechanism to eliminate the fish tail bond and also reduce short tail occurrence on the wedge. Wire bond parameters were optimized to achieve zero fish tails and a wedge pull reading of >4.0gf. Destructive tests such as the wedge pull test were used to assess the bonding quality. Failure modes are analyzed using a high power optical microscope and a Scanning Electron Microscope (SEM). By looking into all possible root causes and identifying how the factors interact, Design of Experiments (DOE) efforts are carried out and good solutions were implemented.", "title": "" }, { "docid": "51ec3dee7a91b7e9afcb26694ded0c11", "text": "[1] PRIMELT2.XLS software is introduced for calculating primary magma composition and mantle potential temperature (TP) from an observed lava composition. It is an upgrade over a previous version in that it includes garnet peridotite melting and it detects complexities that can lead to overestimates in TP by >100 C. These are variations in source lithology, source volatile content, source oxidation state, and clinopyroxene fractionation. Nevertheless, application of PRIMELT2.XLS to lavas from a wide range of oceanic islands reveals no evidence that volatile-enrichment and source fertility are sufficient to produce them. 
All are associated with thermal anomalies, and this appears to be a prerequisite for their formation. For the ocean islands considered in this work, TP maxima are typically 1450–1500 C in the Atlantic and 1500–1600 C in the Pacific, substantially greater than 1350 C for ambient mantle. Lavas from the Galápagos Islands and Hawaii record in their geochemistry high TP maxima and large ranges in both TP and melt fraction over short horizontal distances, a result that is predicted by the mantle plume model.", "title": "" }, { "docid": "5a139ec7dd49b15b49080f0ff07c22e7", "text": "Modern thin-client systems are designed to provide the same graphical interfaces and applications available on traditional desktop computers while centralizing administration and allowing more efficient use of computing resources. Despite the rapidly increasing popularity of these client-server systems, there are few reliable analyses of their performance. Industry standard benchmark techniques commonly used for measuring desktop system performance are ill-suited for measuring the performance of thin-client systems because these benchmarks only measure application performance on the server, not the actual user-perceived performance on the client. To address this problem, we have developed slow-motion benchmarking, a new measurement technique for evaluating thin-client systems. In slow-motion benchmarking, performance is measured by capturing network packet traces between a thin client and its respective server during the execution of a slow-motion version of a conventional benchmark application. These results can then be used either independently or in conjunction with conventional benchmark results to yield an accurate and objective measure of the performance of thin-client systems. We have demonstrated the effectiveness of slow-motion benchmarking by using this technique to measure the performance of several popular thin-client systems in various network environments on Web and multimedia workloads. Our results show that slow-motion benchmarking solves the problems with using conventional benchmarks on thin-client systems and is an accurate tool for analyzing the performance of these systems.", "title": "" }, { "docid": "80576ea7e8c52465cec9094990bf7243", "text": "Nowadays, classifying sentiment from social media has been a strategic thing since people can express their feeling about something in an easy way and short text. Mining opinion from social media has become important because people are usually honest with their feeling on something. In our research, we tried to identify the problems of classifying sentiment from Indonesian social media. We identified that people tend to express their opinion in text while the emoticon is rarely used and sometimes misleading. We also identified that the Indonesian social media opinion can be classified not only to positive, negative, neutral and question but also to a special mix case between negative and question type. Basically there are two levels of problem: word level and sentence level. Word level problems include the usage of punctuation mark, the number usage to replace letter, misspelled word and the usage of nonstandard abbreviation. In sentence level, the problem is related with the sentiment type such as mentioned before. In our research, we built a sentiment classification system which includes several steps such as text preprocessing, feature extraction, and classification. The text preprocessing aims to transform the informal text into formal text. 
The word formalization methods that we use are the deletion of punctuation marks, tokenization, conversion of numbers to letters, the reduction of repeated letters, and using a corpus with Levenshtein distance to formalize abbreviations. The sentence formalization methods that we use are negation handling, sentiment relatives, and affix handling. Rule-based, SVM and Maximum Entropy are used as the classification algorithms, with features consisting of the counts of positive, negative, and question words in a sentence, and bigrams. From our experimental results, the best classification method is SVM, which yields 83.5% accuracy.", "title": "" }, { "docid": "42979dd6ad989896111ef4de8d26b2fb", "text": "Online dating services let users expand their dating pool beyond their social network and specify important characteristics of potential partners. To assess compatibility, users share personal information — e.g., identifying details or sensitive opinions about sexual preferences or worldviews — in profiles or in one-on-one communication. Thus, participating in online dating poses inherent privacy risks. How people reason about these privacy risks in modern online dating ecosystems has not been extensively studied. We present the results of a survey we designed to examine privacy-related risks, practices, and expectations of people who use or have used online dating, then delve deeper using semi-structured interviews. We additionally analyzed 400 Tinder profiles to explore how these issues manifest in practice. Our results reveal tensions between privacy and competing user values and goals, and we demonstrate how these results can inform future designs.", "title": "" }, { "docid": "2cc8e89fcb5bbcde3b8ba3f0bf8288f4", "text": "Non-orthogonal multiple access is one of the key techniques developed for future 5G communication systems, among which the recently proposed sparse code multiple access (SCMA) has attracted a lot of researchers' interest. By exploring the shaping gain of the multi-dimensional complex codewords, SCMA is shown to have a better performance compared with some other non-orthogonal schemes such as low density signature (LDS). However, although the sparsity of the codewords makes the near optimal message passing algorithm (MPA) possible, the decoding complexity is still very high. In this paper, we propose a low complexity decoding algorithm based on list sphere decoding. Complexity analysis and simulation results show that the proposed algorithm can reduce the computational complexity substantially while achieving near maximum likelihood (ML) performance.", "title": "" } ]
scidocsrr
dae1e4d37fbc1382dda7792d6b664ae8
DEVELOPMENT OF A NOVEL UWB VIVALDI ANTENNA ARRAY USING SIW TECHNOLOGY
[ { "docid": "270f1ea2c5626545c64f61a105670535", "text": "An antipodal Vivaldi antenna gives good performance over a wide bandwidth, but the cross-polarization level is unfortunately high due to the slot flares on different layers. As a simple technique to reduce the cross-polarization level in the array, the antipodal antenna and its mirror image are placed alternately in the H-plane. The cross-polarization level is reduced more than 20 dB at broadside.", "title": "" }, { "docid": "40252c2047c227fbbeee4d492bee9bc6", "text": "A planar integrated multi-way broadband SIW power divider is proposed. It can be combined by the fundamental modules of T-type or Y-type two-way power dividers and an SIW bend directly. A sixteen way SIW power divider prototype was designed, fabricated and measured. The whole structure is made by various metallic-vias on the same substrate. Hence, it can be easily fabricated and conveniently integrated into microwave and millimeter-wave integrated circuits for mass production with low cost and small size.", "title": "" } ]
[ { "docid": "e3a7b1302e70b003acac4c15057908a7", "text": "modeling business processes a petri net-oriented approach modeling business processes a petri net oriented approach modeling business processes: a petri net-oriented approach modeling business processes a petri net oriented approach modeling business processes a petri net oriented approach modeling business processes: a petri net-oriented approach modeling business processes a petri net oriented approach a petri net-based software process model for developing modeling business processes a petri net oriented approach petri nets and business process management dagstuhl modeling business processes a petri net oriented approach killer app for petri nets process mining a petri net approach to analysis and composition of web information gathering and process modeling in a petri net modeling business processes a petri net oriented approach an ontology-based evaluation of process modeling with business process modeling in inspire using petri nets document about nc fairlane manuals is available on print from business process modeling to the specification of modeling of adaptive cyber physical systems using aspect petri net theory and the modeling of systems tbsh towards agent-based modeling and verification of a discussion of object-oriented process modeling modeling and simulation versions of business process using workflow modeling for virtual enterprise: a petri net simulation of it service processes with petri-nets george mason university the volgenau school of engineering process-oriented business performance management with syst 620 / ece 673 discrete event systems general knowledge questions answers on india tool-based business process modeling using the som approach income/w f a petri net based approach to w orkflow segment 2 exam study guide world history jbacs specifying business processes over objects rd.springer english june exam 2013 question paper 3 nulet", "title": "" }, { "docid": "70991373ae71f233b0facd2b5dd1a0d3", "text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which are originated by insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. This survey attempts to line up these works together with only three most common types of insider namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. 
Finally, some issues are raised based on the observations from the reviewed works and new research gaps and challenges identified.", "title": "" }, { "docid": "7acdc25c20b4aa16fc3391cb878a9577", "text": "Recurrent Neural Networks (RNNs) have long been recognized for their potential to model complex time series. However, it remains to be determined what optimization techniques and recurrent architectures can be used to best realize this potential. The experiments presented take a deep look into Hessian free optimization, a powerful second order optimization method that has shown promising results, but still does not enjoy widespread use. This algorithm was used to train to a number of RNN architectures including standard RNNs, long short-term memory, multiplicative RNNs, and stacked RNNs on the task of character prediction. The insights from these experiments led to the creation of a new multiplicative LSTM hybrid architecture that outperformed both LSTM and multiplicative RNNs. When tested on a larger scale, multiplicative LSTM achieved character level modelling results competitive with the state of the art for RNNs using very different methodology.", "title": "" }, { "docid": "5b76f50ef9745ef03205d3657e6fd3cd", "text": "In this paper we present preliminary results and future directions of work for a project in which we are building an RFID based system to sense and monitor free weight exercises.", "title": "" }, { "docid": "7579ea317e216e80bcd08eabb4615711", "text": "This paper presents an ultra low power clock source using a 1μW temperature compensated on-chip digitally controlled oscillator (Osc<sub>CMP</sub>) and a 100nW uncompensated oscillator (Osc<sub>UCMP</sub>) with respective temperature stabilities of 5ppm/°C and 1.67%/°C. A fast locking circuit re-locks Osc<sub>UCMP</sub> to Osc<sub>CMP</sub> often enough to achieve a high effective temperature stability. Measurements of a 130nm CMOS chip show that this combination gives a stability of 5ppm/°C from 20°C to 40°C (14ppm/°C from 20°C to 70°C) at 150nW if temperature changes by 1°C or less every second. This result is 7X lower power than typical XTALs and 6X more stable than prior on-chip solutions.", "title": "" }, { "docid": "3fd52b589a58f449ab1c03a19a034a2d", "text": "This paper presents a low-power high-bit-rate phase modulator based on a digital PLL with single-bit TDC and two-point injection scheme. At high bit rates, this scheme requires a controlled oscillator with wide tuning range and becomes critically sensitive to the delay spread between the two injection paths, considerably degrading the achievable error-vector magnitude and causing significant spectral regrowth. A multi-capacitor-bank oscillator topology with an automatic background regulation of the gains of the banks and a digital adaptive filter for the delay-spread correction are introduced. The phase modulator fabricated in a 65-nm CMOS process synthesizes carriers in the 2.9-to-4.0-GHz range from a 40-MHz crystal reference and it is able to produce a phase change up to ±π with 10-bit resolution in a single reference cycle. Measured EVM at 3.6 GHz is -36 dB for a 10-Mb/s GMSK and a 20-Mb/s QPSK modulation. 
Power dissipation is 5 mW from a 1.2-V voltage supply, leading to a total energy consumption of 0.25 nJ/bit.", "title": "" }, { "docid": "47bae1df7bc512e8a458122892e145f8", "text": "This paper presents an inertial-measurement-unit-based pen (IMUPEN) and its associated trajectory reconstruction algorithm for motion trajectory reconstruction and handwritten digit recognition applications. The IMUPEN is composed of a triaxial accelerometer, two gyroscopes, a microcontroller, and an RF wireless transmission module. Users can hold the IMUPEN to write numerals or draw simple symbols at normal speed. During writing or drawing movements, the inertial signals generated for the movements are transmitted to a computer via the wireless module. A trajectory reconstruction algorithm composed of the procedures of data collection, signal preprocessing, and trajectory reconstruction has been developed for reconstructing the trajectories of movements. In order to minimize the cumulative errors caused by the intrinsic noise/drift of sensors, we have developed an orientation error compensation method and a multiaxis dynamic switch. The advantages of the IMUPEN include the following: 1) It is portable and can be used anywhere without any external reference device or writing ambit limitations, and 2) its trajectory reconstruction algorithm can reduce orientation and integral errors effectively and thus can reconstruct the trajectories of movements accurately. Our experimental results on motion trajectory reconstruction and handwritten digit recognition have successfully validated the effectiveness of the IMUPEN and its trajectory reconstruction algorithm.", "title": "" }, { "docid": "a3ba14c4746daeedb0740893447af9ca", "text": "The present model outlines the mechanisms underlying habitual control of responding and the ways in which habits interface with goals. Habits emerge from the gradual learning of associations between responses and the features of performance contexts that have historically covaried with them (e.g., physical settings, preceding actions). Once a habit is formed, perception of contexts triggers the associated response without a mediating goal. Nonetheless, habits interface with goals. Constraining this interface, habit associations accrue slowly and do not shift appreciably with current goal states or infrequent counterhabitual responses. Given these constraints, goals can (a) direct habits by motivating repetition that leads to habit formation and by promoting exposure to cues that trigger habits, (b) be inferred from habits, and (c) interact with habits in ways that preserve the learned habit associations. Finally, the authors outline the implications of the model for habit change, especially for the self-regulation of habit cuing.", "title": "" }, { "docid": "cd09b6c39b4f74a0d19656e6789313f7", "text": "We address two shortcomings of the common spatial patterns (CSP) algorithm for spatial filtering in the context of brain-computer interfaces (BCIs) based on electroencephalography/magnetoencephalography (EEG/MEG): First, the question of optimality of CSP in terms of the minimal achievable classification error remains unsolved. Second, CSP has been initially proposed for two-class paradigms. Extensions to multiclass paradigms have been suggested, but are based on heuristics. We address these shortcomings in the framework of information theoretic feature extraction (ITFE). 
We show that for two-class paradigms, CSP maximizes an approximation of mutual information of extracted EEG/MEG components and class labels. This establishes a link between CSP and the minimal classification error. For multiclass paradigms, we point out that CSP by joint approximate diagonalization (JAD) is equivalent to independent component analysis (ICA), and provide a method to choose those independent components (ICs) that approximately maximize mutual information of ICs and class labels. This eliminates the need for heuristics in multiclass CSP, and allows incorporating prior class probabilities. The proposed method is applied to the dataset IIIa of the third BCI competition, and is shown to increase the mean classification accuracy by 23.4% in comparison to multiclass CSP.", "title": "" }, { "docid": "3a3470d13c9c63af1a62ee7bc57a96ef", "text": "Cloud computing is a distributed computing model that still faces problems. New ideas emerge to take advantage of its features and among the research challenges found in the cloud, we can highlight Identity and Access Management. The main problems of the application of access control in the cloud are the necessary flexibility and scalability to support a large number of users and resources in a dynamic and heterogeneous environment, with collaboration and information sharing needs. This paper proposes the use of risk-based dynamic access control for cloud computing. The proposal is presented as an access control model based on an extension of the XACML standard with three new components: the Risk Engine, the Risk Quantification Web Services and the Risk Policies. The risk policies present a method to describe risk metrics and their quantification, using local or remote functions. The risk policies allow users and cloud service providers to define how to handle risk-based access control for their resources, using different quantification and aggregation methods. The model reaches the access decision based on a combination of XACML decisions and risk analysis. A prototype of the model is implemented, showing it has enough expressivity to describe the models of related work. In the experimental results, the prototype takes between 2 and 6 milliseconds to reach access decisions using a risk policy. A discussion on the security aspects of the model is also presented.", "title": "" }, { "docid": "c0e2d1740bbe2c40e7acf262cb658ea2", "text": "The quest for algorithms that enable cognitive abilities is an important part of machine learning. A common trait in many recently investigated cognitive-like tasks is that they take into account different data modalities, such as visual and textual input. In this paper we propose a novel and generally applicable form of attention mechanism that learns high-order correlations between various data modalities. We show that high-order correlations effectively direct the appropriate attention to the relevant elements in the different data modalities that are required to solve the joint task. We demonstrate the effectiveness of our high-order attention mechanism on the task of visual question answering (VQA), where we achieve state-of-the-art performance on the standard VQA dataset.", "title": "" }, { "docid": "424cf46098c6b5f315f955c34596976f", "text": "This report describes a query expansion method based on the expansion of geographical terms by means of WordNet synonyms and meronyms. 
We used this method for our participation to the GeoCLEF 2005 English monolingual task, while using the well-known Lucene search engine for indexing and retrieval. The obtained results show that the proposed method was not suitable for the GeoCLEF track, while WordNet can be used in a more effective way during the indexing phase, by adding synonyms and holonyms to the", "title": "" }, { "docid": "33d65d9ae8575d9de3b6a7cf0c30db37", "text": "The prediction of collisions amongst N rigid objects may be reduced to a series of computations of the time to first contact for all pairs of objects. Simple enclosing bounds and hierarchical partitions of the space-time domain are often used to avoid testing object-pairs that clearly will not collide. When the remaining pairs involve only polyhedra under straight-line translation, the exact computation of the collision time and of the contacts requires only solving for intersections between linear geometries. When a pair is subject to a more general relative motion, such a direct collision prediction calculation may be intractable. The popular brute force collision detection strategy of executing the motion for a series of small time steps and of checking for static interferences after each step is often computationally prohibitive. We propose instead a less expensive collision prediction strategy, where we approximate the relative motion between pairs of objects by a sequence of screw motion segments, each defined by the relative position and orientation of the two objects at the beginning and at the end of the segment. We reduce the computation of the exact collision time and of the corresponding face/vertex and edge/edge collision points to the numeric extraction of the roots of simple univariate analytic functions. Furthermore, we propose a series of simple rejection tests, which exploit the particularity of the screw motion to immediately decide that some objects do not collide or to speed-up the prediction of collisions by about 30%, avoiding on average 3/4 of the root-finding queries even when the object actually collide.", "title": "" }, { "docid": "d487d83c805114cb36be664e48e3a588", "text": "Although motor imagery is widely used for motor learning in rehabilitation and sports training, the underlying mechanisms are still poorly understood. Based on fMRI data sets acquired with very high temporal resolution (300 ms) under motor execution and imagery conditions, we utilized Dynamic Causal Modeling (DCM) to determine effective connectivity measures between supplementary motor area (SMA) and primary motor cortex (M1). A set of 28 models was tested in a Bayesian framework and the by-far best-performing model revealed a strong suppressive influence of the motor imagery condition on the forward connection between SMA and M1. Our results clearly indicate that the lack of activation in M1 during motor imagery is caused by suppression from the SMA. These results highlight the importance of the SMA not only for the preparation and execution of intended movements, but also for suppressing movements that are represented in the motor system but not to be performed.", "title": "" }, { "docid": "da7fc676542ccc6f98c36334d42645ae", "text": "Extracting the defects of the road pavement in images is difficult and, most of the time, one image is used alone. The difficulties of this task are: illumination changes, objects on the road, artefacts due to the dynamic acquisition. 
In this work, we try to solve some of these problems by using acquisitions from different points of view. In consequence, we present a new methodology based on these steps : the detection of defects in each image, the matching of the images and the merging of the different extractions. We show the increase in performances and more particularly how the false detections are reduced.", "title": "" }, { "docid": "d58f60013b507b286fcfc9f19304fea6", "text": "The outcome of patients suffering from spondyloarthritis is determined by chronic inflammation and new bone formation leading to ankylosis. The latter process manifests by new cartilage and bone formation leading to joint or spine fusion. This article discusses the main mechanisms of new bone formation in spondyloarthritis. It reviews the key molecules and concepts of new bone formation and ankylosis in animal models of disease and translates these findings to human disease. In addition, proposed biomarkers of new bone formation are evaluated and the translational current and future challenges are discussed with regards to new bone formation in spondyloarthritis.", "title": "" }, { "docid": "16d6862cf891e5219aae10d5fcd6ce92", "text": "This paper describes the Power System Analysis Toolbox (PSAT), an open source Matlab and GNU/Octave-based software package for analysis and design of small to medium size electric power systems. PSAT includes power flow, continuation power flow, optimal power flow, small-signal stability analysis, and time-domain simulation, as well as several static and dynamic models, including nonconventional loads, synchronous and asynchronous machines, regulators, and FACTS. PSAT is also provided with a complete set of user-friendly graphical interfaces and a Simulink-based editor of one-line network diagrams. Basic features, algorithms, and a variety of case studies are presented in this paper to illustrate the capabilities of the presented tool and its suitability for educational and research purposes.", "title": "" }, { "docid": "b1ae52dfa5ed1bb9c835816ca3fd52b4", "text": "The use of the halide-sensitive fluorescent probes (6-methoxy-N-(-sulphopropyl)quinolinium (SPQ) and N-(ethoxycarbonylmethyl)-6-methoxyquinolinium bromide (MQAE)) to measure chloride transport in cells has now been established as an alternative to the halide-selective electrode technique, radioisotope efflux assays and patch-clamp electrophysiology. We report here procedures for the assessment of halide efflux, using SPQ/MQAE halide-sensitive fluorescent indicators, from both adherent cultured epithelial cells and freshly obtained primary human airway epithelial cells. The procedure describes the calculation of efflux rate constants using experimentally derived SPQ/MQAE fluorescence intensities and empirically derived Stern-Volmer calibration constants. These fluorescence methods permit the quantitative analysis of CFTR function.", "title": "" }, { "docid": "160058dae12ea588352f5015483081fc", "text": "Semiotics is the study of signs. Signs take the form of words, images, sounds, odours, flavours, acts or objects but such things have no intrinsic meaning and become signs only when we invest them with meaning. ‘Nothing is a sign unless it is interpreted as a sign,’ declares Peirce (Peirce, 1931). The two dominant models of a sign are the linguist Ferdinand de Saussure and the philosopher Charles Sanders Peirce. This paper attempts to study the role of semiotics in linguistics. How signs play an important role in studying the language? 
Index: Semioticstheory of signs and symbols Semanticsstudy of sentences Denotataan actual object referred to by a linguistic expression Divergentmove apart in different directions Linguisticsscientific study of language --------------------------------------------------------------------------------------------Introduction: Semiotics or semiology is the study of sign processes or signification and communication, signs and symbols. It is divided into the three following branches:  Semantics: Relation between signs and the things to which they refer; their denotata  Syntactics: Relations among signs in formal structures  Pragmatics: Relation between signs and their effects on people who use them Syntactics is the branch of semiotics that deals with the formal properties of signs and symbols. It deals with the rules that govern how words are combined to form phrases and sentences. According to Charles Morris “semantics deals with the relation of signs to their designate and the objects which they may or do denote” (Foundations of the theory of science, 1938); and, pragmatics deals with the biotic aspects of semiosis, that is, with all the psychological, biological and sociological phenomena which occur in the functioning of signs. The term, which was spelled semeiotics was first used in English by Henry Stubbes in a very precise sense to denote the branch of medical science relating to the interpretation of signs. Semiotics is not widely institutionalized as an academic discipline. It is a field of study involving many different theoretical stances and methodological tools. One of the broadest definitions is that of Umberto Eco, who states that ‘semiotics is concerned with everything that can be taken as a sign’ (A Theory of Semiotics, 1979). Semiotics involves the study not only of what we refer to as ‘signs’ in everyday speech, but of anything which ‘stands for’ something else. In a semiotic sense, signs take the form of words, images, sounds, gestures and objects. Whilst for the linguist Saussure, ‘semiology’ was ‘a science which studies the role of signs as part of social life’, (Nature of the linguistics sign, 1916) for the philosopher Charles Pierce ‘semiotic’ was the ‘formal doctrine of signs’ which was closely related to logic. For him, ‘a sign... is something which stands to somebody for something in some respect or capacity’. He declared that ‘every thought is a sign.’ Literature review: Semiotics is often employed in the analysis of texts, although it is far more than just a mode of textual analysis. Here it should perhaps be noted that a ‘text’ can IJSER International Journal of Scientific & Engineering Research, Volume 6, Issue 1, January-2015 2135", "title": "" } ]
scidocsrr
135718a11c5c576342081b3d7822feb8
Sentiment Analysis in Twitter: From Classification to Quantification of Sentiments within Tweets
[ { "docid": "6a96678b14ec12cb4bb3db4e1c4c6d4e", "text": "Emoticons are widely used to express positive or negative sentiment on Twitter. We report on a study with live users to determine whether emoticons are used to merely emphasize the sentiment of tweets, or whether they are the main elements carrying the sentiment. We found that the sentiment of an emoticon is in substantial agreement with the sentiment of the entire tweet. Thus, emoticons are useful as predictors of tweet sentiment and should not be ignored in sentiment classification. However, the sentiment expressed by an emoticon agrees with the sentiment of the accompanying text only slightly better than random. Thus, using the text accompanying emoticons to train sentiment models is not likely to produce the best results, a fact that we show by comparing lexicons generated using emoticons with others generated using simple textual features.", "title": "" }, { "docid": "48b1fdb9343aee6582f11013d63667de", "text": "Most of the state of the art works and researches on the automatic sentiment analysis and opinion mining of texts collected from social networks and microblogging websites are oriented towards the classification of texts into positive and negative. In this paper, we propose a pattern-based approach that goes deeper in the classification of texts collected from Twitter (i.e., tweets). We classify the tweets into 7 different classes; however the approach can be run to classify into more classes. Experiments show that our approach reaches an accuracy of classification equal to 56.9% and a precision level of sentimental tweets (other than neutral and sarcastic) equal to 72.58%. Nevertheless, the approach proves to be very accurate in binary classification (i.e., classification into “positive” and “negative”) and ternary classification (i.e., classification into “positive”, “negative” and “neutral”): in the former case, we reach an accuracy of 87.5% for the same dataset used after removing neutral tweets, and in the latter case, we reached an accuracy of classification of 83.0%.", "title": "" } ]
[ { "docid": "f67e221a12e0d8ebb531a1e7c80ff2ff", "text": "Fine-grained image classification is to recognize hundreds of subcategories belonging to the same basic-level category, such as 200 subcategories belonging to the bird, which is highly challenging due to large variance in the same subcategory and small variance among different subcategories. Existing methods generally first locate the objects or parts and then discriminate which subcategory the image belongs to. However, they mainly have two limitations: 1) relying on object or part annotations which are heavily labor consuming; and 2) ignoring the spatial relationships between the object and its parts as well as among these parts, both of which are significantly helpful for finding discriminative parts. Therefore, this paper proposes the object-part attention model (OPAM) for weakly supervised fine-grained image classification and the main novelties are: 1) object-part attention model integrates two level attentions: object-level attention localizes objects of images, and part-level attention selects discriminative parts of object. Both are jointly employed to learn multi-view and multi-scale features to enhance their mutual promotion; and 2) Object-part spatial constraint model combines two spatial constraints: object spatial constraint ensures selected parts highly representative and part spatial constraint eliminates redundancy and enhances discrimination of selected parts. Both are jointly employed to exploit the subtle and local differences for distinguishing the subcategories. Importantly, neither object nor part annotations are used in our proposed approach, which avoids the heavy labor consumption of labeling. Compared with more than ten state-of-the-art methods on four widely-used datasets, our OPAM approach achieves the best performance.", "title": "" }, { "docid": "97a1453d230df4f8c57eed1d3a1aaa19", "text": "In this letter, an isolation improvement method between two closely packed planar inverted-F antennas (PIFAs) is proposed via a miniaturized ground slot with a chip capacitor. The proposed T-shaped ground slot acts as a notch filter, and the capacitor is utilized to reduce the slot length. The equivalent circuit model of the proposed slot with the capacitor is derived. The measured isolation between two PIFAs is down to below -20 dB at the whole WLAN band of 2.4 GHz.", "title": "" }, { "docid": "0bf88df55230271c61966f90485cde00", "text": "BACKGROUND\nNewer approaches for understanding suicidal behavior suggest the assessment of suicide-specific beliefs and cognitions may improve the detection and prediction of suicidal thoughts and behaviors. The Suicide Cognitions Scale (SCS) was developed to measure suicide-specific beliefs, but it has not been tested in a military setting.\n\n\nMETHODS\nData were analyzed from two separate studies conducted at three military mental health clinics (one U.S. Army, two U.S. Air Force). Participants included 175 active duty Army personnel with acute suicidal ideation and/or a recent suicide attempt referred for a treatment study (Sample 1) and 151 active duty Air Force personnel receiving routine outpatient mental health care (Sample 2). In both samples, participants completed self-report measures and clinician-administered interviews. Follow-up suicide attempts were assessed via clinician-administered interview for Sample 1. 
Statistical analyses included confirmatory factor analysis, between-group comparisons by history of suicidality, and generalized regression modeling.\n\n\nRESULTS\nTwo latent factors were confirmed for the SCS: Unloveability and Unbearability. Each demonstrated good internal consistency, convergent validity, and divergent validity. Both scales significantly predicted current suicidal ideation (βs >0.316, ps <0.002) and significantly differentiated suicide attempts from nonsuicidal self-injury and control groups (F(6, 286)=9.801, p<0.001). Both scales significantly predicted future suicide attempts (AORs>1.07, ps <0.050) better than other risk factors.\n\n\nLIMITATIONS\nSelf-report methodology, small sample sizes, predominantly male samples.\n\n\nCONCLUSIONS\nThe SCS is a reliable and valid measure that predicts suicidal ideation and suicide attempts among military personnel better than other well-established risk factors.", "title": "" }, { "docid": "88acb55335bc4530d8dfe5f44738d39f", "text": "Driving is an attention-demanding task, especially with children in the back seat. While most recommendations prefer to reduce children's screen time in common entertainment systems, e.g. DVD players and tablets, parents often rely on these systems to entertain the children during car trips. These systems often lack key components that are important for modern parents, namely, sociability and educational content. In this contribution we introduce PANDA, a parental affective natural driving assistant. PANDA is a virtual in-car entertainment agent that can migrate around the car to interact with the parent-driver or with children in the back seat. PANDA supports the parent-driver via speech interface, helps to mediate her interaction with children in the back seat, and works to reduce distractions for the driver while also engaging, entertaining and educating children. We present the design of PANDA system and preliminary tests of the prototype system in a car setting.", "title": "" }, { "docid": "04097beae36a8414cf53d8418db745ab", "text": "Accurate terrain estimation is critical for autonomous offroad navigation. Reconstruction of a 3D surface allows rough and hilly ground to be represented, yielding faster driving and better planning and control. However, data from a 3D sensor samples the terrain unevenly, quickly becoming sparse at longer ranges and containing large voids because of occlusions and inclines. The proposed approach uses online kernel-based learning to estimate a continuous surface over the area of interest while providing upper and lower bounds on that surface. Unlike other approaches, visibility information is exploited to constrain the terrain surface and increase precision, and an efficient gradient-based optimization allows for realtime implementation.", "title": "" }, { "docid": "e4a14229d3a10356f6b10ac0c19c8ec7", "text": "The Programmer's Learning Machine (PLM) is an interactive exerciser for learning programming and algorithms. Using an integrated and graphical environment that provides a short feedback loop, it allows students to learn in a (semi)-autonomous way. This generic platform also enables teachers to create specific programming microworlds that match their teaching goals. 
This paper discusses our design goals and motivations, introduces the existing material and the proposed microworlds, and details the typical use cases from the student and teacher point of views.", "title": "" }, { "docid": "fdd33f6248bef5837ea322305d9a0549", "text": "Visual Grounding (VG) aims to locate the most relevant object or region in an image, based on a natural language query. The query can be a phrase, a sentence or even a multi-round dialogue. There are three main challenges in VG: 1) what is the main focus in a query; 2) how to understand an image; 3) how to locate an object. Most existing methods combine all the information curtly, which may suffer from the problem of information redundancy (i.e. ambiguous query, complicated image and a large number of objects). In this paper, we formulate these challenges as three attention problems and propose an accumulated attention (A-ATT) mechanism to reason among them jointly. Our A-ATT mechanism can circularly accumulate the attention for useful information in image, query, and objects, while the noises are ignored gradually. We evaluate the performance of A-ATT on four popular datasets (namely Refer-COCO, ReferCOCO+, ReferCOCOg, and Guesswhat?!), and the experimental results show the superiority of the proposed method in term of accuracy.", "title": "" }, { "docid": "0ab220829ea6667549ca274eaedb2a9e", "text": "In a culture where collectivism is pervasive such as China, social norms can be one of the most powerful tools to influence consumers’ behavior. Individuals are driven to meet social expectations and fulfill social roles in collectivist cultures. Therefore, this study was designed to investigate how Chinese consumers’ concern with saving face affects sustainable fashion product purchase intention and how it also moderates consumers’ commitment to sustainable fashion. An empirical data set of 469 undergraduate students in Beijing and Shanghai was used to test our hypotheses. Results confirmed that face-saving is an important motivation for Chinese consumers’ purchase of sustainable fashion items, and it also attenuated the effect of general product value while enhancing the effect of products’ green value in predicting purchasing trends. The findings contribute to the knowledge of sustainable consumption in Confucian culture, and thus their managerial implications were also discussed.", "title": "" }, { "docid": "6a08787a6f87d79d5ebca20569706c59", "text": "Recently published methods enable training of bitwise neural networks which allow reduced representation of down to a single bit per weight. We present a method that exploits ensemble decisions based on multiple stochastically sampled network models to increase performance figures of bitwise neural networks in terms of classification accuracy at inference. Our experiments with the CIFAR-10 and GTSRB datasets show that the performance of such network ensembles surpasses the performance of the high-precision base model. With this technique we achieve 5.81% best classification error on CIFAR-10 test set using bitwise networks. Concerning inference on embedded systems we evaluate these bitwise networks using a hardware efficient stochastic rounding procedure. 
Our work contributes to efficient embedded bitwise neural networks.", "title": "" }, { "docid": "1caaac35c25cd9efb729b57e59c41be5", "text": "The design of elastic file synchronization services like Dropbox is an open and complex issue yet not unveiled by the major commercial providers, as it includes challenges like fine-grained programmable elasticity and efficient change notification to millions of devices. In this paper, we propose a novel architecture for file synchronization which aims to solve the above two major challenges. At the heart of our proposal lies ObjectMQ, a lightweight framework for providing programmatic elasticity to distributed objects using messaging. The efficient use of indirect communication: i) enables programmatic elasticity based on queue message processing, ii) simplifies change notifications offering simple unicast and multicast primitives; and iii) provides transparent load balancing based on queues.\n Our reference implementation is StackSync, an open source elastic file synchronization Cloud service developed in the context of the FP7 project CloudSpaces. StackSync supports both predictive and reactive provisioning policies on top of ObjectMQ that adapt to real traces from the Ubuntu One service. The feasibility of our approach has been extensively validated with an open benchmark, including commercial synchronization services like Dropbox or OneDrive.", "title": "" }, { "docid": "ad9fd6e57616a0abc5377dcf6e80d6ec", "text": "Recent research has provided evidence that software developers experience a wide range of emotions. We argue that among those emotions anger deserves special attention as it can serve as an onset for tools supporting collaborative softwaredevelopment. This, however, requires a fine-grained model of the anger emotion, able to distinguish between anger directed towards self, others, and objects. Detecting anger towards self could be useful to support developers experiencing difficulties, detection of anger towards others might be helpful for community management, detecting anger towards objects might be helpful to recommend and prioritize improvements. As a first step towards automatic identification of anger direction, we built a classifier for anger direction, based on a manually annotated gold standard of 723 sentences that were obtained by mining comments in Apache issue reports.", "title": "" }, { "docid": "9d555906ea3ea9fb3a03c735db62e3b2", "text": "\"Electronic-sport\" (E-Sport) is now established as a new entertainment genre. More and more players enjoy streaming their games, which attract even more viewers. In fact, in a recent social study, casual players were found to prefer watching professional gamers rather than playing the game themselves. Within this context, advertising provides a significant source of revenue to the professional players, the casters (displaying other people's games) and the game streaming platforms. For this paper, we crawled, during more than 100 days, the most popular among such specialized platforms: Twitch.tv. Thanks to these gigabytes of data, we propose a first characterization of a new Web community, and we show, among other results, that the number of viewers of a streaming session evolves in a predictable way, that audience peaks of a game are explainable and that a Condorcet method can be used to sensibly rank the streamers by popularity. Last but not least, we hope that this paper will bring to light the study of E-Sport and its growing community. 
They indeed deserve the attention of industrial partners (for the large amount of money involved) and researchers (for interesting problems in social network dynamics, personalized recommendation, sentiment analysis, etc.).", "title": "" }, { "docid": "e45e49fb299659e2e71f5c4eb825aff6", "text": "We propose a lifelong learning system that has the ability to reuse and transfer knowledge from one task to another while efficiently retaining the previously learned knowledgebase. Knowledge is transferred by learning reusable skills to solve tasks in Minecraft, a popular video game which is an unsolved and high-dimensional lifelong learning problem. These reusable skills, which we refer to as Deep Skill Networks, are then incorporated into our novel Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture using two techniques: (1) a deep skill array and (2) skill distillation, our novel variation of policy distillation (Rusu et al. 2015) for learning skills. Skill distillation enables the HDRLN to efficiently retain knowledge and therefore scale in lifelong learning, by accumulating knowledge and encapsulating multiple reusable skills into a single distilled network. The H-DRLN exhibits superior performance and lower learning sample complexity compared to the regular Deep Q Network (Mnih et al. 2015) in sub-domains of Minecraft.", "title": "" }, { "docid": "96bc6ffcc299e7b2221dbb8e2c4349dd", "text": "At millimeter wave (mmW) frequencies, beamforming and large antenna arrays are an essential requirement to combat the high path loss for mmW communication. Moreover, at these frequencies, very large bandwidths are available t o fulfill the data rate requirements of future wireless networks. However, utilization of these large bandwidths and of large antenna a rrays can result in a high power consumption which is an even bigger concern for mmW receiver design. In a mmW receiver, the analog-to-digital converter (ADC) is generally considered as the most power consuming block. In this paper, primarily focusing on the ADC power, we analyze and compare the total power consumption of the complete analog chain for Analog, Digita l and Hybrid beamforming (ABF, DBF and HBF) based receiver design. We show how power consumption of these beamforming schemes varies with a change in the number of antennas, the number of ADC bits (b) and the bandwidth (B). Moreover, we compare low power (as in [1]) and high power (as in [2]) ADC models, and show that for a certain range of number of antenna s, b and B, DBF may actually have a comparable and lower power consumption than ABF and HBF, respectively. In addition, we also show how the choice of an appropriate beamforming schem e depends on the signal-to-noise ratio regime.", "title": "" }, { "docid": "a11d7186eb2c04477d4355cf8f91b4f2", "text": "This study reports the results of a meta-analysis of empirical studies on Internet addiction published in academic journals for the period 1996-2006. The analysis showed that previous studies have utilized inconsistent criteria to define Internet addicts, applied recruiting methods that may cause serious sampling bias, and examined data using primarily exploratory rather than confirmatory data analysis techniques to investigate the degree of association rather than causal relationships among variables. Recommendations are provided on how researchers can strengthen this growing field of research.", "title": "" }, { "docid": "34c1910dbd746368671b2b795114edfe", "text": "Article history: Received: 4.7.2015. 
Received in revised form: 9.1.2016. Accepted: 29.1.2016. This paper presents a design of a distributed switched reluctance motor for an integrated motorfan system. Unlike a conventional compact motor structure, the rotor is distributed into the ends of the impeller blades. This distributed structure of motor makes more space for airflow to pass through so that the system efficiency is highly improved. Simultaneously, the distributed structure gives the motor a higher torque, better efficiency and heat dissipation. The paper first gives an initial design of a switched reluctance motor based on system structure constraints and output equations, then it predicts the machine performance and determines phase current and winding turns based on equivalent magnetic circuit analysis; finally it validates and refines the analytical design with 3D transient finite element analysis. It is found that the analytical performance prediction agrees well with finite element analysis results except for the weakness on core losses estimation. The results of the design shows that the distributed switched reluctance motor can produce a large torque of pretty high efficiency at specified speeds.", "title": "" }, { "docid": "84f496674fa8c3436f06d4663de3da84", "text": "The growth of E-Banking has led to an ease of access and 24-hour banking facility for one and all. However, this has led to a rise in e-banking fraud which is a growing problem affecting users around the world. As card is becoming the most prevailing mode of payment for online as well as regular purchase, fraud related with it is also increasing. The drastic upsurge of online banking fraud can be seen as an integrative misuse of social, cyber and physical resources [1]. Thus, the proposed system uses cryptography and steganography technology along with various data mining techniques in order to effectively secure the e-banking process and prevent online fraud.", "title": "" }, { "docid": "83e897a37aca4c349b4a910c9c0787f4", "text": "Computational imaging methods that can exploit multiple modalities have the potential to enhance the capabilities of traditional sensing systems. In this paper, we propose a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities. Our method combines a convolutional group-sparse representation of images with total variation (TV) regularization for high-quality multimodal imaging. We develop an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets that are typical in such applications. We illustrate the benefit of our approach in the context of joint intensity-depth imaging.", "title": "" }, { "docid": "fbd00a26883954ba0ef290efdc777e9e", "text": "A century of revolutionary growth in aviation has made global travel a reality of daily life. Aircraft and air transport overcame a number of formidable challenges and hostilities in the physical world. Success in this arduous pursuit was not without leveraging advances of the “cyber” layer, i.e., digital computing, data storage and networking, and software, in hardware, infrastructures, humans, and processes, within the airframe, in space, and on the ground. The physical world, however, is evolving continuously in the 21st century, contributing traffic growth and diversity, fossil fuel and ozone layer depletion, demographics and economy dynamics, as some major factors in aviation performance equations. 
In the next 100 years, apart from breakthrough physical advances, such as aircraft structural and electrical designs, we envision aviation's progress will depend on conquering cyberspace challenges and adversities, while safely and securely transitioning cyber benefits to the physical world. A tight integration of cyberspace with the physical world streamlines this vision. This paper proposes a novel cyber-physical system (CPS) framework to understand the cyber layer and cyber-physical interactions in aviation, study their impacts, and identify valuable research directions. This paper presents CPS challenges and solutions for aircraft, aviation users, airports, and air traffic management.", "title": "" }, { "docid": "947fdb3233e57b5df8ce92df31f2a0be", "text": "Recent work by Cohen et al. [1] has achieved state-of-the-art results for learning spherical images in a rotation invariant way by using ideas from group representation theory and noncommutative harmonic analysis. In this paper we propose a generalization of this work that generally exhibits improved performace, but from an implementation point of view is actually simpler. An unusual feature of the proposed architecture is that it uses the Clebsch–Gordan transform as its only source of nonlinearity, thus avoiding repeated forward and backward Fourier transforms. The underlying ideas of the paper generalize to constructing neural networks that are invariant to the action of other compact groups.", "title": "" } ]
scidocsrr
cbc396b5b6b6e10e474504cd070ce1a7
Error Analysis and Retention-Aware Error Management for NAND Flash Memory
[ { "docid": "73284fdf9bc025672d3b97ca5651084a", "text": "With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. Understanding, characterizing, and modeling the distribution of the threshold voltages across different cells in a modern multi-level cell (MLC) flash memory can enable the design of more effective and efficient error correction mechanisms to combat this degradation. We show the first published experimental measurement-based characterization of the threshold voltage distribution of flash memory. To accomplish this, we develop a testing infrastructure that uses the read retry feature present in some 2Y-nm (i.e., 20-24nm) flash chips. We devise a model of the threshold voltage distributions taking into account program/erase (P/E) cycle effects, analyze the noise in the distributions, and evaluate the accuracy of our model. A key result is that the threshold voltage distribution can be modeled, with more than 95% accuracy, as a Gaussian distribution with additive white noise, which shifts to the right and widens as P/E cycles increase. The novel characterization and models provided in this paper can enable the design of more effective error tolerance mechanisms for future flash memories.", "title": "" }, { "docid": "3763da6b72ee0a010f3803a901c9eeb2", "text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.", "title": "" } ]
[ { "docid": "5a7ca2cab0162e49809723e75f9bdef5", "text": "Gene expression is inherently stochastic; precise gene regulation by transcription factors is important for cell-fate determination. Many transcription factors regulate their own expression, suggesting that autoregulation counters intrinsic stochasticity in gene expression. Using a new strategy, cotranslational activation by cleavage (CoTrAC), we probed the stochastic expression dynamics of cI, which encodes the bacteriophage λ repressor CI, a fate-determining transcription factor. CI concentration fluctuations influence both lysogenic stability and induction of bacteriophage λ. We found that the intrinsic stochasticity in cI expression was largely determined by CI expression level irrespective of autoregulation. Furthermore, extrinsic, cell-to-cell variation was primarily responsible for CI concentration fluctuations, and negative autoregulation minimized CI concentration heterogeneity by counteracting extrinsic noise and introducing memory. This quantitative study of transcription factor expression dynamics sheds light on the mechanisms cells use to control noise in gene regulatory networks.", "title": "" }, { "docid": "e0c14955c3a96291b566a43c8b8d97ce", "text": "In this chapter, an introduction to intelligent machine is presented. An explanation on intelligent behavior, and the difference between intelligent and repetitive natural or programmed behavior is provided. Some learning techniques in the field of Artificial Intelligence in constructing intelligent machines are then discussed. In addition, applications of intelligent machines to a number of areas including aerial navigation, ocean and space exploration, and humanoid robots are presented.", "title": "" }, { "docid": "0418d5ce9f15a91aeaacd65c683f529d", "text": "We propose a novel cancelable biometric approach, known as PalmHashing, to solve the non-revocable biometric proposed method hashes palmprint templates with a set of pseudo-random keys to obtain a unique code called palmhash. The palmhash code can be stored in portable devices such tokens and smartcards for verification. Multiple sets of palmha can be maintained in multiple applications. Thus the privacy and security of the applications can be greatly enhance compromised, revocation can also be achieved via direct replacement of a new set of palmhash code. In addition, PalmHashin offers several advantages over contemporary biometric approaches such as clear separation of the genuine-imposter and zero EER occurrences. In this paper, we outline the implementation details of this method and also highlight its p in security-critical applications.  2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "912ac193dae47f89a4400988a7e498a4", "text": "8634 Background: The solid content of juice made from wheat grass is 70% chlorophyll. Chlorophyll is often referred to as \"The blood of plant life\" and has almost the same chemical structure as haemoglobin. Chlorophyll cleanses the blood by improving the supply of oxygen to the circulatory system. Wheat grass is also a complete protein with about 30 enzymes, vitamins & minerals. Wheat grass juice has been proven over many years to benefit people in numerous ways, building the blood, restoring balance in the body, removing toxic metals from the cells, nourishing the liver & kidneys and restoring vitality. 
The aim of our study was to see the effect of wheat grass juice in terminally ill cancer patients to improve the quality of life.\n\n\nMETHODS\nDuring period from January 2003 to December 2005 we selected 400 solid organ cancer patients in our palliative care unit of Netaji Subhash Chandra Bose Cancer Research Institute to see the effect of wheat grass on improvement of haemoglobin level, serum protein & performance status on terminally ill cancer patients. The age range of the patients was 22 year to 87 year (median age 42 years). The different types of cancers were Lung (25%), Breast (20%), Oesophagaus (11%), Colon (9%), Ovary (8%), Hepatocellular carcinoma (6%), Stomach (6%) and others (15%) respectively. We cultivated wheat grass in our campus. When the grasses were 5 days old we took the fresh leaves including roots and made fresh juice out of that and had given 30ml of juice to all our 400 cancer patients for continuous 6 months.\n\n\nRESULT\nThe mean levels of haemoglobin, Serum total protein, albumin and performance status were 8gm%, 5.4gm%, 2.2gm% and 50%. Fifty patients required transfusion support & those patients were excluded from the study. Other 348 patients are evaluated 6 months after giving wheat grass juice. The mean values for haemoglobin, total protein & albumin were improved significantly (pvalue < .005) and were observed mean of 9.6gm%, 7.4gm% and 3.1gm%. White blood cell & platelet count were same in both the cases. The performance status was improved from 50% to 70% (Karnofsky) after wheat grass treatment.\n\n\nCONCLUSION\nWe concluded that wheat grass juice is an effective alternative of blood transfusion. Its use in terminally ill cancer patients should be encouraged. No significant financial relationships to disclose.", "title": "" }, { "docid": "d1f3961959f11ce553237ef8941da86a", "text": "Inspired by recent successes of deep learning in computer vision and speech recognition, we propose a novel framework to encode time series data as different types of images, namely, Gramian Angular Fields (GAF) and Markov Transition Fields (MTF). This enables the use of techniques from computer vision for classification. Using a polar coordinate system, GAF images are represented as a Gramian matrix where each element is the trigonometric sum (i.e., superposition of directions) between different time intervals. MTF images represent the first order Markov transition probability along one dimension and temporal dependency along the other. We used Tiled Convolutional Neural Networks (tiled CNNs) on 12 standard datasets to learn high-level features from individual GAF, MTF, and GAF-MTF images that resulted from combining GAF and MTF representations into a single image. The classification results of our approach are competitive with five stateof-the-art approaches. An analysis of the features and weights learned via tiled CNNs explains why the approach works.", "title": "" }, { "docid": "04b0f8be4eaa99aa6ee5cc6b7c6ad662", "text": "Systems designed with measurement and attestation in mind are often layered, with the lower layers measuring the layers above them. Attestations of such systems, which we call layered attestations, must bundle together the results of a diverse set of application-specific measurements of various parts of the system. Some methods of layered at-testation are more trustworthy than others, so it is important for system designers to understand the trust consequences of different system configurations. 
This paper presents a formal framework for reasoning about layered attestations, and provides generic reusable principles for achieving trustworthy results.", "title": "" }, { "docid": "d274cc94b4840fe2ca61657ca4d39fa7", "text": "PURPOSE OF REVIEW\nTo discuss a component of the pathogenic mechanisms of HIV infection in the context of phenotypic and functional alterations in B cells that are due to persistent viral replication leading to aberrant immune activation and cellular exhaustion. We explore how B-cell exhaustion arises during persistent viremia and how it compares with T-cell exhaustion and similar B-cell alterations in other diseases.\n\n\nRECENT FINDINGS\nHIV-associated B-cell exhaustion was first described in 2008, soon after the demonstration of persistent virus-induced T-cell exhaustion, as well as the identification of a subset B cells in tonsil tissues with immunoregulatory features similar to those observed in T-cell exhaustion. Our understanding of B-cell exhaustion has since expanded in two important areas: the role of inhibitory receptors in the unresponsiveness of exhausted B cells and the increasing evidence that similar B cells are found in other diseases that are associated with aberrant immune activation and inflammation.\n\n\nSUMMARY\nThe phenomenon of B-cell exhaustion is now well established in HIV infection and other diseases characterized by immune activation. Over the coming years, it will be important to understand how cellular exhaustion affects the capacity of the immune system to respond to persisting primary pathogens, as well as to other microbial antigens, whether encountered as secondary infections or following immunization.", "title": "" }, { "docid": "dc38da2f5c87660b218af05f8c8521ed", "text": "In this fMRI study, we investigated the convergence of underlying neural networks in thinking about a scenario involving one's own intentional action and its consequences and setting up and holding in mind an intention to act. A factorial design was employed comprising two factors: i. Causality (intentional or physical events) and ii. Prospective Memory (present or absent). In each condition, subjects answered questions about various hypothetical scenarios, which related either to the link between the subject's own intentions and consequential actions (Intentional Causality) or to the link between a natural, physical event and its consequences (Physical Causality). A prospective memory task was embedded in half the blocks. In this task, subjects were required to keep in mind an intention (to press a key on seeing a red stimulus background) whilst carrying out the ongoing Causality task. Answering questions about intentional causality versus physical causality activated a network of regions that have traditionally been associated with Theory of Mind, including the medial prefrontal cortex (mPFC), the superior temporal sulcus and the temporal poles bilaterally. In addition, the precuneus bordering with posterior cingulate cortex, an area involved in self-awareness and self-related processing, was activated more when thinking about intentional causality. In the prospective memory task, activations were found in the right parietal cortex, frontopolar cortex (BA 10) and precuneus. Different subregions within the precuneus/posterior cingulate cortex were activated in both main effects of intentional causality and prospective memory. 
Therefore, the precuneus/posterior cingulate cortex subserves separately thinking about one's own intentions and consequent actions and bearing in mind an intention to make an action. Previous studies have shown that prospective memory, requiring the formation of an intention and the execution of a corresponding action, is associated with decreased activation in the dorsal mPFC, close to the region activated in Theory of Mind tasks. Here, we found that holding in mind an intention to act and at the same time thinking about an intentional action led to reduced activity in a dorsal section of the mPFC. This was a different region from a more anterior, inferior dorsal mPFC region that responded to intentional causality. This suggests that different regions of mPFC play different roles in thinking about intentions.", "title": "" }, { "docid": "36e6b7bfa7043cfc97b189dc652a3461", "text": "We propose CiteTextRank, a fully unsupervised graph-based algorithm that incorporates evidence from multiple sources (citation contexts as well as document content) in a flexible manner to extract keyphrases. General steps for algorithms for unsupervised keyphrase extraction: 1. Extract candidate words or lexical units from the textual content of the target document by applying stopword and parts-of-speech filters. 2. Score candidate words based on some criterion.", "title": "" }, { "docid": "f383934a6b4b5971158e001b41f1f2ac", "text": "A survey of mental health problems of university students was carried out on 1850 participants in the age range 19-26 years. An indigenous Student Problem Checklist (SPCL) developed by Mahmood & Saleem, (2011), 45 items is a rating scale, designed to determine the prevalence rate of mental health problem among university students. This scale relates to four dimensions of mental health problems as reported by university students, such as: Sense of Being Dysfunctional, Loss of Confidence, Lack of self Regulation and Anxiety Proneness. For interpretation of the overall SPCL score, the authors suggest that scores falling above one SD should be considered as indicative of severe problems, where as score about 2 SD represent very severe problems. Our finding show that 31% of the participants fall in the “severe” category, whereas 16% fall in the “very severe” category. As far as the individual dimensions are concerned, 17% respondents comprising sample of the present study fall in very severe category Sense of Being Dysfunctional, followed by Loss of Confidence (16%), Lack of Self Regulation (14%) and Anxiety Proneness (12%). These findings are in lying with similar other studies on mental health of students. The role of variables like sample characteristics, the measure used, cultural and contextual factors are discussed in determining rates as well as their implications for student counseling service in prevention and intervention.", "title": "" }, { "docid": "677e141690f1e40317bedfe754448b26", "text": "Nowadays, secure data access control has become one of the major concerns in a cloud storage system. As a logical combination of attribute-based encryption and attribute-based signature, attribute-based signcryption (ABSC) can provide confidentiality and an anonymous authentication for sensitive data and is more efficient than traditional “encrypt-then-sign” or “sign-then-encrypt” strategies. Thus, ABSC is suitable for fine-grained access control in a semi-trusted cloud environment and is gaining more and more attention in recent years. 
However, in many previous ABSC schemes, user’s sensitive attributes can be disclosed to the authority, and only a single authority that is responsible for attribute management and key generation exists in the system. In this paper, we propose PMDAC-ABSC, a novel privacy-preserving data access control scheme based on Ciphertext-Policy ABSC, to provide a fine-grained control measure and attribute privacy protection simultaneously in a multi-authority cloud storage system. The attributes of both the signcryptor and the designcryptor can be protected to be known by the authorities and cloud server. Furthermore, the decryption overhead for user is significantly reduced by outsourcing the undesirable bilinear pairing operations to the cloud server without degrading the attribute privacy. The proposed scheme is proven to be secure in the standard model and has the ability to provide confidentiality, unforgeability, anonymous authentication, and public verifiability. The security analysis, asymptotic complexity comparison, and implementation results indicate that our construction can balance the security goals with practical efficiency in computation.", "title": "" }, { "docid": "a429888416cd5c175f3fb2ac90350a06", "text": "Recent years, Software Defined Routers (SDRs) (programmable routers) have emerged as a viable solution to provide a cost-effective packet processing platform with easy extensibility and programmability. Multi-core platforms significantly promote SDRs’ parallel computing capacities, enabling them to adopt artificial intelligent techniques, i.e., deep learning, to manage routing paths. In this paper, we explore new opportunities in packet processing with deep learning to inexpensively shift the computing needs from rule-based route computation to deep learning based route estimation for high-throughput packet processing. Even though deep learning techniques have been extensively exploited in various computing areas, researchers have, to date, not been able to effectively utilize deep learning based route computation for high-speed core networks. We envision a supervised deep learning system to construct the routing tables and show how the proposed method can be integrated with programmable routers using both Central Processing Units (CPUs) and Graphics Processing Units (GPUs). We demonstrate how our uniquely characterized input and output traffic patterns can enhance the route computation of the deep learning based SDRs through both analysis and extensive computer simulations. In particular, the simulation results demonstrate that our proposal outperforms the benchmark method in terms of delay, throughput, and signaling overhead.", "title": "" }, { "docid": "aca908bd224ea81feaa8301aa6f9d86b", "text": "Video tampering is a process of malicious alteration of video content, so as to conceal an object, an event or change the meaning conveyed by the imagery in the video. Fast proliferation of video acquisition devices and powerful video editing software tools have made video tampering an easy task. Hence, the authentication of video files (especially in surveillance applications like bank ATM videos, medical field and legal concerns) are becoming important. Video tampering detection aims to find the traces of tampering and thereby evaluate the authenticity and integrity of the video file. These methods can be classified into active and passive (blind) methods. In this paper, we present a survey on passive video tampering detection methods. 
Passive video tampering detection methods are classified into the following three categories based on the type of forgery they address: Detection of double or multiple compressed videos, Region tampering detection and Video inter-frame forgery detection. At first, we briefly present the preliminaries of video files required for understanding video tampering forgery. The existing papers surveyed are presented concisely; the features used and their limitations are summarized in a compact tabular form. Finally, we have identified some open issues that help to identify new research areas in passive video tampering detection.", "title": "" }, { "docid": "a5e5d61df71f7f27bc02473020d20009", "text": "Mowat-Wilson syndrome (MWS, MIM #235730) is a rare genetic disorder characterized by moderate-tosevere mental retardation, a recognizable facial gestalt and multiple congenital anomalies. The striking facial phenotype in addition to other features such as microcephaly, congenital heart defects, Hirschsprung disease (HSCR), severely delayed motor/speech development, seizures, short stature, corpus callosum agenesis and hypospadias are particularly important clues for the initial clinical diagnosis. All molecularly confirmed cases with typical MWS have a heterozygous loss-of-function mutation in the ZEB2 (zinc finger E-box binding homeobox 2) gene, suggesting that haploinsufficiency of the protein is the main pathological mechanism. Here, we report the first individual with MWS in mainland China confirmed by molecular genetic testing. A 1-day-old girl was referred to the department of surgery for abdominal distension and failure to pass meconium. Targeted exome sequencing revealed a de novo heterozygous nonsense mutation (p.Arg302X) in ZEB2 in the patient. Medical record review revealed mild facial gestalt, HSCR and severe congenital heart defects supporting the diagnosis of MWS. We concluded that facial dysmorphism in newborn babies might be atypical; doctors should pay more attention during physical examination and be aware of MWS if multiple congenital defects were discovered. ZEB2 gene mutation screening would be an effective manner to clarify the diagnosis.", "title": "" }, { "docid": "c19bc89db255ecf88bc1514d8bd7d018", "text": "Fulfilling the requirements of point-of-care testing (POCT) training regarding proper execution of measurements and compliance with internal and external quality control specifications is a great challenge. Our aim was to compare the values of the highly critical parameter hemoglobin (Hb) determined with POCT devices and central laboratory analyzer in the highly vulnerable setting of an emergency department in a supra maximal care hospital to assess the quality of POCT performance. In 2548 patients, Hb measurements using POCT devices (POCT-Hb) were compared with Hb measurements performed at the central laboratory (Hb-ZL). Additionally, sub collectives (WHO anemia classification, patients with Hb <8 g/dl and suprageriatric patients (age >85y.) were analyzed. Overall, the correlation between POCT-Hb and Hb-ZL was highly significant (r = 0.96, p<0.001). Mean difference was -0.44g/dl. POCT-Hb values tended to be higher than Hb-ZL values (t(2547) = 36.1, p<0.001). Standard deviation of the differences was 0.62 g/dl. Only in 26 patients (1%), absolute differences >2.5g/dl occurred. McNemar´s test revealed significant differences regarding anemia diagnosis according to WHO definition for male, female and total patients (♂ p<0.001; ♀ p<0.001, total p<0.001). 
Hb-ZL resulted significantly more often in anemia diagnosis. In samples with Hb<8g/dl, McNemar´s test yielded no significant difference (p = 0.169). In suprageriatric patients, McNemar´s test revealed significant differences regarding anemia diagnosis according to WHO definition in male, female and total patients (♂ p<0.01; ♀ p = 0.002, total p<0.001). The difference between Hb-ZL and POCT-Hb with Hb<8g/dl was not statistically significant (<8g/dl, p = 1.000). Overall, we found a highly significant correlation between the analyzed hemoglobin concentration measurement methods, i.e. POCT devices and at the central laboratory. The results confirm the successful implementation of the presented POCT concept. Nevertheless some limitations could be identified in anemic patients stressing the importance of carefully examining clinically implausible results.", "title": "" }, { "docid": "2a07f55f994ed5f7ce689b2ee0fc9616", "text": "We describe an image-based non-photorealistic rendering pipeline for creating portraits in two styles: The first is a somewhat \"puppet\" like rendering, that treats the face like a relatively uniform smooth surface, with the geometry being emphasised by shading. The second style is inspired by the artist Julian Opie, in which the human face is reduced to its essentials, i.e. homogeneous skin, thick black lines, and facial features such as eyes and the nose represented in a cartoon manner. Our method is able to automatically generate these stylisations without requiring the input images to be tightly cropped, direct frontal view, and moreover perform abstraction while maintaining the distinctiveness of the portraits (i.e. they should remain recognisable).", "title": "" }, { "docid": "6d55978aa80f177f6a859a55380ffed8", "text": "This paper investigates the effect of lowering the supply and threshold voltages on the energy efficiency of CMOS circuits. Using a first-order model of the energy and delay of a CMOS circuit, we show that lowering the supply and threshold voltage is generally advantageous, especially when the transistors are velocity saturated and the nodes have a high activity factor. In fact, for modern submicron technologies, this simple analysis suggests optimal energy efficiency at supply voltages under 0.5 V. Other process and circuit parameters have almost no effect on this optimal operating point. If there is some uncertainty in the value of the threshold or supply voltage, however, the power advantage of this very low voltage operation diminishes. Therefore, unless active feedback is used to control the uncertainty, in the future the supply and threshold voltage will not decrease drastically, but rather will continue to scale down to maintain constant electric fields.", "title": "" }, { "docid": "d83e90a88f3a59ed09b01112131ded2b", "text": "Purpose. Sentiment analysis and emotion processing are attracting increasing interest in many fields. Computer and information scientists are developing automated methods for sentiment analysis of online text. Most of the research have focused on identifying sentiment polarity or orientation—whether a document, usually product or movie review, carries a positive or negative sentiment. It is time for researchers to address more sophisticated kinds of sentiment analysis. This paper evaluates a particular linguistic framework called appraisal theory for adoption in manual as well as automatic sentiment analysis of news text. Methodology. 
The appraisal theory is applied to the analysis of a sample of political news articles reporting on Iraq and economic policies of George W. Bush and Mahmoud Ahmadinejad to assess its utility and to identify challenges in adopting this framework. Findings. The framework was useful in uncovering various aspects of sentiment that should be useful to researchers such as the appraisers and object of appraisal, bias of the appraisers and the author, type of attitude and manner of expressing the sentiment. Problems encountered include difficulty in identifying appraisal phrases and attitude categories because of the subtlety of expression in political news articles, lack of treatment of tense and timeframe, lack of a typology of emotions, and need to identify different types of behaviors (political, verbal and material actions) that reflect sentiment. Value. The study has identified future directions for research in automated sentiment analysis as well as sentiment analysis of online news text. It has also demonstrated how sentiment analysis of news text can be carried out.", "title": "" }, { "docid": "5e75a4ea83600736c601e46cb18aa2c9", "text": "This paper deals with a low-cost 24GHz Doppler radar sensor for traffic surveillance. The basic building blocks of the transmit/receive chain, namely the antennas, the balanced power amplifier (PA), the dielectric resonator oscillator (DRO), the low noise amplifier (LNA) and the down conversion diode mixer are presented underlining the key technologies and manufacturing approaches by means the required performances can be attained while keeping industrial costs extremely low.", "title": "" }, { "docid": "acbb1a68d9e0e1768fff8acc8ae42b32", "text": "The rapid increase in the number of Android malware poses great challenges to anti-malware systems, because the sheer number of malware samples overwhelms malware analysis systems. The classification of malware samples into families, such that the common features shared by malware samples in the same family can be exploited in malware detection and inspection, is a promising approach for accelerating malware analysis. Furthermore, the selection of representative malware samples in each family can drastically decrease the number of malware to be analyzed. However, the existing classification solutions are limited because of the following reasons. First, the legitimate part of the malware may misguide the classification algorithms because the majority of Android malware are constructed by inserting malicious components into popular apps. Second, the polymorphic variants of Android malware can evade detection by employing transformation attacks. In this paper, we propose a novel approach that constructs frequent subgraphs (fregraphs) to represent the common behaviors of malware samples that belong to the same family. Moreover, we propose and develop FalDroid, a novel system that automatically classifies Android malware and selects representative malware samples in accordance with fregraphs. We apply it to 8407 malware samples from 36 families. Experimental results show that FalDroid can correctly classify 94.2% of malware samples into their families using approximately 4.6 sec per app. FalDroid can also dramatically reduce the cost of malware investigation by selecting only 8.5% to 22% representative samples that exhibit the most common malicious behavior among all samples.", "title": "" } ]
scidocsrr
ea5823845521b8b31f6b4ad22f68aa0e
Capturing Reliable Fine-Grained Sentiment Associations by Crowdsourcing and Best-Worst Scaling
[ { "docid": "c8dbc63f90982e05517bbdb98ebaeeb5", "text": "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotionannotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.", "title": "" } ]
[ { "docid": "ecda448df7b28ea5e453c179206e91a4", "text": "The cloud infrastructure provider (CIP) in a cloud computing platform must provide security and isolation guarantees to a service provider (SP), who builds the service(s) for such a platform. We identify last level cache (LLC) sharing as one of the impediments to finer grain isolation required by a service, and advocate two resource management approaches to provide performance and security isolation in the shared cloud infrastructure - cache hierarchy aware core assignment and page coloring based cache partitioning. Experimental results demonstrate that these approaches are effective in isolating cache interference impacts a VM may have on another VM. We also incorporate these approaches in the resource management (RM) framework of our example cloud infrastructure, which enables the deployment of VMs with isolation enhanced SLAs.", "title": "" }, { "docid": "ab7d6a9c9c07ee1a60f01d4017a3a25b", "text": "[Context and Motivation] Many a tool for finding ambiguities in natural language (NL) requirements specifications (RSs) is based on a parser and a parts-of-speech identifier, which are inherently imperfect on real NL text. Therefore, any such tool inherently has less than 100% recall. Consequently, running such a tool on a NL RS for a highly critical system does not eliminate the need for a complete manual search for ambiguity in the RS. [Question/Problem] Can an ambiguity-finding tool (AFT) be built that has 100% recall on the types of ambiguities that are in the AFT’s scope such that a manual search in an RS for ambiguities outside the AFT’s scope is significantly easier than a manual search of the RS for all ambiguities? [Principal Ideas/Results] This paper presents the design of a prototype AFT, SREE (Systemized Requirements Engineering Environment), whose goal is achieving a 100% recall rate for the ambiguities in its scope, even at the cost of a precision rate of less than 100%. The ambiguities that SREE searches for by lexical analysis are the ones whose keyword indicators are found in SREE’s ambiguity-indicator corpus that was constructed based on studies of several industrial strength RSs. SREE was run on two of these industrial strength RSs, and the time to do a completely manual search of these RSs is compared to the time to reject the false positives in SREE’s output plus the time to do a manual search of these RSs for only ambiguities not in SREE’s scope. [Contribution] SREE does not achieve its goals. However, the time comparison shows that the approach to divide ambiguity finding between an AFT with 100% recall for some types of ambiguity and a manual search for only the other types of ambiguity is promising enough to justify more work to improve the implementation of the approach. Some specific improvement suggestions are offered.", "title": "" }, { "docid": "fbd95124640b54a594f29871df4d5a5c", "text": "Gradable adjectives denote a function that takes an object and returns a measure of the degree to which the object possesses some gradable property [Kennedy, C. (1999). Projecting the adjective: The syntax and semantics of gradability and comparison. New York: Garland]. Scales, ordered sets of degrees, have begun to be studied systematically in semantics [Kennedy, C. (to appear). Vagueness and grammar: the semantics of relative and absolute gradable predicates. Linguistics and Philosophy; Kennedy, C. and McNally, L. (2005). Scale structure, degree modification, and the semantics of gradable predicates. 
Language, 81, 345-381; Rotstein, C., and Winter, Y. (2004). Total adjectives vs. partial adjectives: scale structure and higher order modifiers. Natural Language Semantics, 12, 259-288.]. We report four experiments designed to investigate the processing of absolute adjectives with a maximum standard (e.g., clean) and their minimum standard antonyms (dirty). The central hypothesis is that the denotation of an absolute adjective introduces a 'standard value' on a scale as part of the normal comprehension of a sentence containing the adjective (the \"Obligatory Scale\" hypothesis). In line with the predictions of Kennedy and McNally (2005) and Rotstein and Winter (2004), maximum standard adjectives and minimum standard adjectives systematically differ from each other when they are combined with minimizing modifiers like slightly, as indicated by speeded acceptability judgments. An eye movement recording study shows that, as predicted by the Obligatory Scale hypothesis, the penalty due to combining slightly with a maximum standard adjective can be observed during the processing of the sentence; the penalty is not the result of some after-the-fact inferencing mechanism. Further, a type of 'quantificational variability effect' may be observed when a quantificational adverb (mostly) is combined with a minimum standard adjective in sentences like \"The dishes are mostly dirty\", which may receive either a degree interpretation (e.g., 80% dirty) or a quantity interpretation (e.g., 80% of the dishes are dirty). The quantificational variability results provide suggestive support for the Obligatory Scale hypothesis by showing that the standard of a scalar adjective influences the preferred interpretation of other constituents in the sentence.", "title": "" }, { "docid": "153f452486e2eacb9dc1cf95275dd015", "text": "This paper presents a Fuzzy Neural Network (FNN) control system for a traveling-wave ultrasonic motor (TWUSM) driven by a dual mode modulation non-resonant driving circuit. First, the motor configuration and the proposed driving circuit of a TWUSM are introduced. To drive a TWUSM effectively, a novel driving circuit, that simultaneously employs both the driving frequency and phase modulation control scheme, is proposed to provide two-phase balance voltage for a TWUSM. Since the dynamic characteristics and motor parameters of the TWUSM are highly nonlinear and time-varying, a FNN control system is therefore investigated to achieve high-precision speed control. The proposed FNN control system incorporates neuro-fuzzy control and the driving frequency and phase modulation to solve the problem of nonlinearities and variations. The proposed control system is digitally implemented by a low-cost digital signal processor based microcontroller, hence reducing the system hardware size and cost. The effectiveness of the proposed driving circuit and control system is verified with hardware experiments under the occurrence of uncertainties. In addition, the advantages of the proposed control scheme are indicated in comparison with a conventional proportional-integral control system.", "title": "" }, { "docid": "4bad310b6664a665287faa0b48cb8057", "text": "The authors have developed Souryu-I, Souryu-II and Souryu-III, connected crawler vehicles that can travel in rubble. These machines were developed for the purpose of finding survivors trapped inside collapsed buildings. 
However, when conducting experiments in post-disaster environments with Souryu-III, mechanical and control limitations have been identified. This led the authors to develop novel crawler units using crawler tracks strengthened with metal, and develop two improved models, called Souryu-IV composed of three double-sided crawler bodies, a joint driving unit, a blade-spring joint mechanism, and cameras and Souryu-V composed of mono-tread-crawler bodies, elastic-rod-joint mechanisms, and cameras. The authors then conducted basic motion experiments and teleoperated control experiments on off-road fields with Souryu-IV and Souryu-V. Their high performance in experiments of urban rescue operations was confirmed. However, several problems were identified during the driving experiments.", "title": "" }, { "docid": "2272325860332d5d41c02f317ab2389e", "text": "For a developing nation, deploying big data (BD) technology and introducing data science in higher education is a challenge. A pessimistic scenario is: Mis-use of data in many possible ways, waste of trained manpower, poor BD certifications from institutes, under-utilization of resources, disgruntled management staff, unhealthy competition in the market, poor integration with existing technical infrastructures. Also, the questions in the minds of students, scientists, engineers, teachers and managers deserve wider attention. Besides the stated perceptions and analyses perhaps ignoring socio-political and scientific temperaments in developing nations, the following questions arise: How did the BD phenomenon naturally occur, post technological developments in Computer and Communications Technology and how did different experts react to it? Are academicians elsewhere agreeing on the fact that BD is a new science? Granted that big data science is a new science what are its foundations as compared to conventional topics in Physics, Chemistry or Biology? Or, is it similar in an esoteric sense to astronomy or nuclear science? What are the technological and engineering implications locally and globally and how these can be advantageously used to augment business intelligence, for example? In other words, will the industry adopt the changes due to tactical advantages? How can BD success stories be faithfully carried over elsewhere? How will BD affect the Computer Science and other curricula? How will BD benefit different segments of our society on a large scale? To answer these, an appreciation of the BD as a science and as a technology is necessary. This paper presents a quick BD overview, relying on the contemporary literature; it addresses: characterizations of BD and the BD people, the background required for the students and teachers to join the BD bandwagon, the management challenges in embracing BD so that the bottomline is clear.", "title": "" }, { "docid": "a0368f63df75a18fb668fc9de6aab98f", "text": "In this research, we compare malware detection techniques based on static, dynamic, and hybrid analysis. Specifically, we train Hidden Markov Models (HMMs) on both static and dynamic feature sets and compare the resulting detection rates over a substantial number of malware families. We also consider hybrid cases, where dynamic analysis is used in the training phase, with static techniques used in the detection phase, and vice versa. In our experiments, a fully dynamic approach generally yields the best detection rates. 
We discuss the implications of this research for malware detection based on hybrid techniques.", "title": "" }, { "docid": "c7f38e2284ad6f1258fdfda3417a6e14", "text": "Millimeter wave (mmWave) systems must overcome heavy signal attenuation to support high-throughput wireless communication links. The small wavelength in mmWave systems enables beamforming using large antenna arrays to combat path loss with directional transmission. Beamforming with multiple data streams, known as precoding, can be used to achieve even higher performance. Both beamforming and precoding are done at baseband in traditional microwave systems. In mmWave systems, however, the high cost of mixed-signal and radio frequency chains (RF) makes operating in the passband and analog domains attractive. This hardware limitation places additional constraints on precoder design. In this paper, we consider single user beamforming and precoding in mmWave systems with large arrays. We exploit the structure of mmWave channels to formulate the precoder design problem as a sparsity constrained least squares problem. Using the principle of basis pursuit, we develop a precoding algorithm that approximates the optimal unconstrained precoder using a low dimensional basis representation that can be efficiently implemented in RF hardware. We present numerical results on the performance of the proposed algorithm and show that it allows mmWave systems to approach waterfilling capacity.", "title": "" }, { "docid": "b7dbf710a191e51dc24619b2a520cf31", "text": "This work addresses the problem of estimating the full body 3D human pose and shape from a single color image. This is a task where iterative optimization-based solutions have typically prevailed, while Convolutional Networks (ConvNets) have suffered because of the lack of training data and their low resolution 3D predictions. Our work aims to bridge this gap and proposes an efficient and effective direct prediction method based on ConvNets. Central part to our approach is the incorporation of a parametric statistical body shape model (SMPL) within our end-to-end framework. This allows us to get very detailed 3D mesh results, while requiring estimation only of a small number of parameters, making it friendly for direct network prediction. Interestingly, we demonstrate that these parameters can be predicted reliably only from 2D keypoints and masks. These are typical outputs of generic 2D human analysis ConvNets, allowing us to relax the massive requirement that images with 3D shape ground truth are available for training. Simultaneously, by maintaining differentiability, at training time we generate the 3D mesh from the estimated parameters and optimize explicitly for the surface using a 3D per-vertex loss. Finally, a differentiable renderer is employed to project the 3D mesh to the image, which enables further refinement of the network, by optimizing for the consistency of the projection with 2D annotations (i.e., 2D keypoints or masks). The proposed approach outperforms previous baselines on this task and offers an attractive solution for direct prediction of3D shape from a single color image.", "title": "" }, { "docid": "9de8581f04b0b52767fbae2460f5fa38", "text": "We review flows of dense cohesionless granular materials, with a special focus on the question of constitutive equations. We first discuss the existence of a dense flow regime characterized by enduring contacts. 
We then emphasize that dimensional analysis strongly constrains the relation between stresses and shear rates, and show that results from experiments and simulations in different configurations support a description in terms of a frictional visco-plastic constitutive law. We then discuss the successes and limitations of this empirical rheology in light of recent alternative theoretical approaches. Finally, we briefly present depth-averaged methods developed for free surface granular flows.", "title": "" }, { "docid": "337004b597e3407f14cb1f0c5d0dad14", "text": "Endowing a chatbot with personality or an identity is quite challenging but critical to deliver more realistic and natural conversations. In this paper, we address the issue of generating responses that are coherent to a pre-specified agent profile. We design a model consisting of three modules: a profile detector to decide whether a post should be responded using the profile and which key should be addressed, a bidirectional decoder to generate responses forward and backward starting from a selected profile value, and a position detector that predicts a word position from which decoding should start given a selected profile value. We show that general conversation data from social media can be used to generate profile-coherent responses. Manual and automatic evaluation shows that our model can deliver more coherent, natural, and diversified responses.", "title": "" }, { "docid": "3e5e9eecab5937dc1ec7ab835b045445", "text": "Kombucha is a beverage of probable Manchurian origins obtained from fermented tea by a microbial consortium composed of several bacteria and yeasts. This mixed consortium forms a powerful symbiosis capable of inhibiting the growth of potentially contaminating bacteria. The fermentation process also leads to the formation of a polymeric cellulose pellicle due to the activity of certain strains of Acetobacter sp. The tea fermentation process by the microbial consortium was able to show an increase in certain biological activities which have been already studied; however, little information is available on the characterization of its active components and their evolution during fermentation. Studies have also reported that the use of infusions from other plants may be a promising alternative.\n\n\nPRACTICAL APPLICATION\nKombucha is a traditional fermented tea whose consumption has increased in the recent years due to its multiple functional properties such as anti-inflammatory potential and antioxidant activity. The microbiological composition of this beverage is quite complex and still more research is needed in order to fully understand its behavior. This study comprises the chemical and microbiological composition of the tea and the main factors that may affect its production.", "title": "" }, { "docid": "967aae790b938ccb219ecf68965c5b02", "text": "This paper describes the control algorithms of the high speed mobile robot Kurt3D. Kurt3D drives up to 4 m/s autonomously and reliably in an unknown office environment. We present the reliable hardware, fast control cycle algorithms and a novel set value computation scheme for achieving these velocities. 
In addition we sketch a real-time capable laser based position tracking method that is well suited for driving with these velocities.", "title": "" }, { "docid": "8a74e4f5a03aa1bb7bbdf4dca4b0b035", "text": "A novel particle swarm optimization (PSO)-based algorithm for the traveling salesman problem (TSP) is presented. An uncertain searching strategy and a crossover eliminated technique are used to accelerate the convergence speed. Compared with the existing algorithms for solving TSP using swarm intelligence, it has been shown that the size of the solved problems could be increased by using the proposed algorithm. Another PSO-based algorithm is proposed and applied to solve the generalized traveling salesman problem by employing the generalized chromosome. Two local search techniques are used to speed up the convergence. Numerical results show the effectiveness of the proposed algorithms. © 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b2c05f820195154dbbb76ee68740b5d9", "text": "DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.", "title": "" }, { "docid": "f8a6b721f99e54db0c4c81b9713aae78", "text": "In this paper, a new bridgeless single-ended primary inductance converter power-factor-correction rectifier is introduced. The proposed circuit provides lower conduction losses with reduced components simultaneously. In conventional PFC converters (continuous-conduction-mode boost converter), a voltage loop and a current loop are required for PFC. In the proposed converter, the control circuit is simplified, and no current loop is required while the converter operates in discontinuous conduction mode. Theoretical analysis and simulation results are provided to explain circuit operation. A prototype of the proposed converter is realized, and the results are presented. The measured efficiency shows 1% improvement in comparison to conventional SEPIC rectifier.", "title": "" }, { "docid": "bb368c3cd951a6b6170b95c10c3533de", "text": "Adaptation to tempo changes in sensorimotor synchronization is hypothesized to rest on two processes, one (phase correction) being largely automatic and the other (period correction) requiring conscious awareness and attention. In this study, participants tapped their finger in synchrony with auditory sequences containing a tempo change and continued tapping after sequence termination. Their intention to adapt or not to adapt to the tempo change was manipulated through instructions, their attentional resources were varied by introducing a concurrent secondary task (mental arithmetic), and their awareness of the tempo changes was assessed through perceptual judgements. 
As predicted, period correction was found to be strongly dependent on all three variables, whereas phase correction depended only on intention.", "title": "" }, { "docid": "6bc3114cc800446f4d28eb47f40adc1e", "text": "We propose a novel computer-aided detection (CAD) framework of breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, the new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve the detection sensitivity with a small loss of FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections; as much as 70% (from 15 to 4.5 per image) at only cost of 4.6% sensitivity loss (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well or better (70.7% and 80.0% per 1.5 and 3.5 FPs per image respectively) than the results of mammography CAD algorithms previously reported in the literature.", "title": "" }, { "docid": "f91e1638e4812726ccf96f410da2624b", "text": "We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent’s policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and Dueling agents (entropy reward and -greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.", "title": "" }, { "docid": "32415299f9df0d776bc30b3fcc106113", "text": "The advent of affordable consumer grade RGB-D cameras has brought about a profound advancement of visual scene reconstruction methods. Both computer graphics and computer vision researchers spend significant effort to develop entirely new algorithms to capture comprehensive shape models of static and dynamic scenes with RGB-D cameras. This led to significant advances of the state of the art along several dimensions. Some methods achieve very high reconstruction detail, despite limited sensor resolution. Others even achieve real-time performance, yet possibly at lower quality. New concepts were developed to capture scenes at larger spatial and temporal extent. Other recent algorithms flank shape reconstruction with concurrent material and lighting estimation, even in general scenes and unconstrained conditions. In this state-of-the-art report, we analyze these recent developments in RGB-D scene reconstruction in detail and review essential related work. We explain, compare, and critically analyze the common underlying algorithmic concepts that enabled these recent advancements. Furthermore, we show how algorithms are designed to best exploit the benefits of RGB-D data while suppressing their often non-trivial data distortions. 
In addition, this report identifies and discusses important open research questions and suggests relevant directions for future work. CCS Concepts: Computing methodologies: Reconstruction; Appearance and texture representations; Motion capture.", "title": "" } ]
scidocsrr
808b62c7f0faafc0b0e327f42f1519bb
Bug Replication in Code Clones: An Empirical Study
[ { "docid": "6eb4eb9b80b73bdcd039dfc8e07c3f5a", "text": "Code duplication or copying a code fragment and then reuse by pasting with or without any modifications is a well known code smell in software maintenance. Several studies show that about 5% to 20% of a software systems can contain duplicated code, which is basically the results of copying existing code fragments and using then by pasting with or without minor modifications. One of the major shortcomings of such duplicated fragments is that if a bug is detected in a code fragment, all the other fragments similar to it should be investigated to check the possible existence of the same bug in the similar fragments. Refactoring of the duplicated code is another prime issue in software maintenance although several studies claim that refactoring of certain clones are not desirable and there is a risk of removing them. However, it is also widely agreed that clones should at least be detected. In this paper, we survey the state of the art in clone detection research. First, we describe the clone terms commonly used in the literature along with their corresponding mappings to the commonly used clone types. Second, we provide a review of the existing clone taxonomies, detection approaches and experimental evaluations of clone detection tools. Applications of clone detection research to other domains of software engineering and in the same time how other domain can assist clone detection research have also been pointed out. Finally, this paper concludes by pointing out several open problems related to clone detection research. ∗This document represents our initial findings and a further study is being carried on. Reader’s feedback is welcome at croy@cs.queensu.ca.", "title": "" } ]
[ { "docid": "086f9cbed93553ca00b2afeff1cb8508", "text": "Rapid advance of location acquisition technologies boosts the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. A wide spectrum of applications can benefit from the trajectory data mining. Bringing unprecedented opportunities, large-scale trajectory data also pose great challenges. In this paper, we survey various applications of trajectory data mining, e.g., path discovery, location prediction, movement behavior analysis, and so on. Furthermore, this paper reviews an extensive collection of existing trajectory data mining techniques and discusses them in a framework of trajectory data mining. This framework and the survey can be used as a guideline for designing future trajectory data mining solutions.", "title": "" }, { "docid": "fe6630363491af99b78c232087edceb1", "text": "We consider the exploration/exploitation problem in reinforcement learning. For exploitation, it is well known that the Bellman equation connects the value at any time-step to the expected value at subsequent time-steps. In this paper we consider a similar uncertainty Bellman equation (UBE), which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps, thereby extending the potential exploratory benefit of a policy beyond individual time-steps. We prove that the unique fixed point of the UBE yields an upper bound on the variance of the posterior distribution of the Q-values induced by any policy. This bound can be much tighter than traditional count-based bonuses that compound standard deviation rather than variance. Importantly, and unlike several existing approaches to optimism, this method scales naturally to large systems with complex generalization. Substituting our UBE-exploration strategy for -greedy improves DQN performance on 51 out of 57 games in the Atari suite.", "title": "" }, { "docid": "e9021866b3a00866d3472f41d0be310c", "text": "The purpose of this paper is to outline the theory of computational complexity which has emerged as a comprehensive theory during the last decade. This theory is concerned with the quantitative aspects of computations and its central theme is the measuring of the difficulty of computing functions. The paper concentrates on the study of computational complexity measures defined for all computable functions and makes no attempt to survey the whole field exhaustively nor to present the material in historical order. Rather it presents the basic concepts, results, and techniques of computational complexity from a new point of view from which the ideas are more easily understood and fit together as a coherent whole. K E Y W O R D S A N D P H R A S E S : computational complexity, complexity axioms, complexity measures, computation speed, time-bounds, tape-bounds, speed-up, Turing machines, diagonalization, length of programs CR C A T E G O R I E S : 5.20, 5.22, 5.23, 5.24", "title": "" }, { "docid": "b43553a835a829e00b15f8f843a51c55", "text": "Much has been written on implementation of enterprise resource planning (ERP) systems in organizations of various sizes. The literature is replete with many cases studies of both successful and unsuccessful ERP implementations. However, there have been very few empirical studies that attempt to delineate the critical issues that drive successful implementation of ERP systems. 
Although the failure rates of ERP implementations have been publicized widely, this has not distracted companies from investing large sums of money on ERP systems. This study reports the results of an empirical research on the critical issues affecting successful ERP implementation. Through the study, eight factors were identified that attempts to explain 86% of the variances that impact ERP implementation. There was a strong correlation between successfully implementing ERP and six out of the eight factors identified. # 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "fe8c27e7ef05816cc4c4e2c68eeaf2f9", "text": "Chassis cavities have recently been proposed as a new mounting position for vehicular antennas. Cavities can be concealed and potentially offer more space for antennas than shark-fin modules mounted on top of the roof. An antenna cavity for the front or rear edge of the vehicle roof is designed, manufactured and measured for 5.9 GHz. The cavity offers increased radiation in the horizontal plane and to angles below horizon, compared to cavities located in the roof center.", "title": "" }, { "docid": "a36fae7ccd3105b58a4977b5a2366ee8", "text": "As the number of big data management systems continues to grow, users increasingly seek to leverage multiple systems in the context of a single data analysis task. To efficiently support such hybrid analytics, we develop a tool called PipeGen for efficient data transfer between database management systems (DBMSs). PipeGen automatically generates data pipes between DBMSs by leveraging their functionality to transfer data via disk files using common data formats such as CSV. PipeGen creates data pipes by extending such functionality with efficient binary data transfer capabilities that avoid file system materialization, include multiple important format optimizations, and transfer data in parallel when possible. We evaluate our PipeGen prototype by generating 20 data pipes automatically between five different DBMSs. The results show that PipeGen speeds up data transfer by up to 3.8× as compared to transferring using disk files.", "title": "" }, { "docid": "2651e41af0ed03a1078197bcde20a7d3", "text": "The use of automated blood pressure (BP) monitoring is growing as it does not require much expertise and can be performed by patients several times a day at home. Oscillometry is one of the most common measurement methods used in automated BP monitors. A review of the literature shows that a large variety of oscillometric algorithms have been developed for accurate estimation of BP but these algorithms are scattered in many different publications or patents. Moreover, considering that oscillometric devices dominate the home BP monitoring market, little effort has been made to survey the underlying algorithms that are used to estimate BP. In this review, a comprehensive survey of the existing oscillometric BP estimation algorithms is presented. The survey covers a broad spectrum of algorithms including the conventional maximum amplitude and derivative oscillometry as well as the recently proposed learning algorithms, model-based algorithms, and algorithms that are based on analysis of pulse morphology and pulse transit time. The aim is to classify the diverse underlying algorithms, describe each algorithm briefly, and discuss their advantages and disadvantages. 
This paper will also review the artifact removal techniques in oscillometry and the current standards for the automated BP monitors.", "title": "" }, { "docid": "c08980a949896d270092d4c2f6b7afe7", "text": "The two underlying premises of automatic face recognition are uniqueness and permanence. This paper investigates the permanence property by addressing the following: Does face recognition ability of state-of-the-art systems degrade with elapsed time between enrolled and query face images? If so, what is the rate of decline w.r.t. the elapsed time? While previous studies have reported degradations in accuracy, no formal statistical analysis of large-scale longitudinal data has been conducted. We conduct such an analysis on two mugshot databases, which are the largest facial aging databases studied to date in terms of number of subjects, images per subject, and elapsed times. Mixed-effects regression models are applied to genuine similarity scores from state-of-the-art COTS face matchers to quantify the population-mean rate of change in genuine scores over time, subject-specific variability, and the influence of age, sex, race, and face image quality. Longitudinal analysis shows that despite decreasing genuine scores, 99% of subjects can still be recognized at 0.01% FAR up to approximately 6 years elapsed time, and that age, sex, and race only marginally influence these trends. The methodology presented here should be periodically repeated to determine age-invariant properties of face recognition as state-of-the-art evolves to better address facial aging.", "title": "" }, { "docid": "a39364020ec95a3d35dfe929d4a000c0", "text": "The Internet of Things (IoTs) refers to the inter-connection of billions of smart devices. The steadily increasing number of IoT devices with heterogeneous characteristics requires that future networks evolve to provide a new architecture to cope with the expected increase in data generation. Network function virtualization (NFV) provides the scale and flexibility necessary for IoT services by enabling the automated control, management and orchestration of network resources. In this paper, we present a novel NFV enabled IoT architecture targeted for a state-of-the art operating room environment. We use web services based on the representational state transfer (REST) web architecture as the IoT application's southbound interface and illustrate its applicability via two different scenarios.", "title": "" }, { "docid": "2b30506690acbae9240ef867e961bc6c", "text": "Background Breast milk can turn pink with Serratia marcescens colonization, this bacterium has been associated with several diseases and even death. It is seen most commonly in the intensive care settings. Discoloration of the breast milk can lead to premature termination of nursing. We describe two cases of pink-colored breast milk in which S. marsescens was isolated from both the expressed breast milk. Antimicrobial treatment was administered to the mothers. Return to breastfeeding was successful in both the cases. Conclusions Pink breast milk is caused by S. marsescens colonization. In such cases,early recognition and treatment before the development of infection is recommended to return to breastfeeding.", "title": "" }, { "docid": "9daa362cc15e988abdc117786b000741", "text": "The objective of this paper is to develop the hybrid neural network models for bankruptcy prediction. 
The proposed hybrid neural network models are (1) a MDA-assisted neural network, (2) an ID3-assisted neural network, and (3) a SOFM(self organizing feature map)-assisted neural network. Both the MDA-assisted neural network and the ID3-assisted neural network are the neural network models operating with the input variables selected by the MDA method and ID3 respectively. The SOFM-assisted neural network combines a backpropagation model (supervised learning) with a SOFM model (unsupervised learning). The performance of the hybrid neural network model is evaluated using MDA and ID3 as a benchmark. Empirical results using Korean bankruptcy data show that hybrid neural network models are very promising neural network models for bankruptcy prediction in terms of predictive accuracy and adaptability.", "title": "" }, { "docid": "a74ccbf1f9280806a3f21f7ce468a4c7", "text": "The professional norms of good journalism include in particular the following: truthfulness, objectivity, neutrality and detachment. For Public Relations these norms are at best irrelevant. The only thing that matters is success. And this success is measured in terms ofachieving specific communication aims which are \"externally defined by a client, host organization or particular groups ofstakeholders\" (Hanitzsch, 2007, p. 2). Typical aims are, e.g., to convince the public of the attractiveness of a product, of the justice of one's own political goals or also of the wrongfulness of a political opponent.", "title": "" }, { "docid": "444ce710b4c6a161ae5f801ed0ae8bec", "text": "This paper investigates a machine learning approach for temporally ordering and anchoring events in natural language texts. To address data sparseness, we used temporal reasoning as an oversampling method to dramatically expand the amount of training data, resulting in predictive accuracy on link labeling as high as 93% using a Maximum Entropy classifier on human annotated data. This method compared favorably against a series of increasingly sophisticated baselines involving expansion of rules derived from human intuitions.", "title": "" }, { "docid": "c1978e4936ed5bda4e51863dea7e93ee", "text": "In needle-based medical procedures, beveled-tip flexible needles are steered inside soft tissue with the aim of reaching pre-defined target locations. The efficiency of needle-based interventions depends on accurate control of the needle tip. This paper presents a comprehensive mechanics-based model for simulation of planar needle insertion in soft tissue. The proposed model for needle deflection is based on beam theory, works in real-time, and accepts the insertion velocity as an input that can later be used as a control command for needle steering. The model takes into account the effects of tissue deformation, needle-tissue friction, tissue cutting force, and needle bevel angle on needle deflection. Using a robot that inserts a flexible needle into a phantom tissue, various experiments are conducted to separately identify different subsets of the model parameters. The validity of the proposed model is verified by comparing the simulation results to the empirical data. The results demonstrate the accuracy of the proposed model in predicting the needle tip deflection for different insertion velocities.", "title": "" }, { "docid": "4607124e8bb0e8c4d4f3ab7a6dab9646", "text": "Data cleaning techniques usually rely on some quality rules to identify violating tuples, and then fix these violations using some repair algorithms. 
Oftentimes, the rules, which are related to the business logic, can only be defined on some target report generated by transformations over multiple data sources. This creates a situation where the violations detected in the report are decoupled in space and time from the actual source of errors. In addition, applying the repair on the report would need to be repeated whenever the data sources change. Finally, even if repairing the report is possible and affordable, this would be of little help towards identifying and analyzing the actual sources of errors for future prevention of violations at the target. In this paper, we propose a system to address this decoupling. The system takes quality rules defined over the output of a transformation and computes explanations of the errors seen on the output. This is performed both at the target level to describe these errors and at the source level to prescribe actions to solve them. We present scalable techniques to detect, propagate, and explain errors. We also study the effectiveness and efficiency of our techniques using the TPC-H Benchmark for different scenarios and classes of quality rules.", "title": "" }, { "docid": "83991055d207c47bc2d5af0d83bfcf9c", "text": "BACKGROUND\nThe present study aimed at investigating the role of depression and attachment styles in predicting cell phone addiction.\n\n\nMETHODS\nIn this descriptive correlational study, a sample including 100 students of Payame Noor University (PNU), Reyneh Center, Iran, in the academic year of 2013-2014 was selected using volunteer sampling. Participants were asked to complete the adult attachment inventory (AAI), Beck depression inventory-13 (BDI-13) and the cell phone overuse scale (COS).\n\n\nFINDINGS\nResults of the stepwise multiple regression analysis showed that depression and avoidant attachment style were the best predictors of students' cell phone addiction (R(2) = 0.23).\n\n\nCONCLUSION\nThe results of this study highlighted the predictive value of depression and avoidant attachment style concerning students' cell phone addiction.", "title": "" }, { "docid": "acca339b2437da35ca75aecd411c7b86", "text": "form to demonstrate potential theoretical causes for qualitatively assessed real-world phenomena? Alternatively, can they be used to create well-parameterized empirical simulations appropriate for scenario and policy analysis? How can these models be empirically parameterized, verified, and validated? What are some remaining challenges and open questions in this research area? By providing answers to these questions, we hope to offer guidance to researchers considering the utility of this new modeling approach. We also hope to spark a healthy debate among researchers as to the potential advantages, limitations, and major research challenges of MAS/LUCC modeling. As MAS modeling studies are being undertaken by geographers in other research fields—including transportation, integrated assessment, recreation, and resource management—many of the issues raised in this article may be relevant for other applications as well. The remainder of this article sequentially addresses the questions outlined above. Approaches to Modeling Land-Use/Cover Change This section examines myriad LUCC modeling approaches and offers MAS as a means of complementing other techniques. We briefly discuss the strengths and weaknesses of seven broad, partly overlapping categories of models: mathematical equation-based, system dynamics, statistical, expert system, evolutionary, cellular, and hybrid. 
This review is not exhaustive and only serves to highlight ways in which present techniques are complemented by MAS/LUCC models that combine cellular and agent-based models. More comprehensive overviews of LUCCmodeling techniques focus on tropical deforestation (Lambin 1994; Kaimowitz and Angelsen 1998), economic models of land use (Plantinga 1999), ecological landscapes (Baker 1989), urban and regional community planning (U.S. EPA 2000), and LUCC dynamics (Briassoulis 2000; Agarwal et al. 2002; Veldkamp and Lambin 2001; Verburg et al. forthcoming). Equation-Based Models Most models are mathematical in some way, but some are especially so, in that they rely on equations that seek a static or equilibrium solution. The most common mathematical models are sets of equations based on theories of population growth and diffusion that specify cumulative LUCC over time (Sklar and Costanza 1991). More complex models, often grounded in economic theory, employ simultaneous joint equations (Kaimowitz and Angelsen 1998). One variant of such models is based on linear programming (Weinberg, Kling, and Wilen 1993; Howitt 1995), potentially linked to GIS information on land parcels (Chuvieco 1993; Longley, Higgs, and Martin 1994; Cromley and Hanink 1999). A major drawback of such models is that a numerical or analytical solution to the system of equations must be obtained, limiting the level of complexity that may practically be built into such models. Simulation models that combine mathematical equationswith other data structures are considered below.", "title": "" }, { "docid": "6a26355ef30ba95538c5c89dc07d36f3", "text": "Gamification evolved to one of the most important trends in technology and therefore gains more and more practical and scientific notice. Yet academia lacks a comprehensive overview of research, even though a review of prior, relevant literature is essential for advancing knowledge in a field. Therefore a novel classification framework for Gamification in Information Systems with the intention to provide a structured, summarized as well as organized overview was constructed to close this gap of research. A literature review on Gamification in quality outlets combined with a Grounded Theory approach served as a starting point. As a result this paper provides a foundation for current and future research to advance the knowledge on Gamification. Moreover it offers a structure for Gamification research which was not available previously. Findings from the literature review were mapped to the classification framework and analyzed. Derived from the classification framework and its outcome future research outlets were identified.", "title": "" }, { "docid": "5fe4a9e1ef0ba8b98d410e48764acfc3", "text": "We report an ethnographic study of prosocial behavior inconnection to League of Legends, one of the most popular games in the world. In this game community, the game developer, Riot Games, implemented a system that allowed players to volunteer their time to identify unacceptable player behaviors and punish players associated with these behaviors. With the prosocial goal of improving the community and promoting sportsmanship with in the competitive culture, a small portion of players worked diligently in the system with little reward. In this paper, we use interviews and analysis of forum discussions to examine how players themselves explain their participation in the system situated in the game culture of League of Legends. 
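As a concrete, deliberately generic example of the equation-based style discussed above, a cumulative land-conversion trajectory is often written as a logistic growth law. The functional form below is a textbook illustration chosen for this note, not one taken from the cited models:

```latex
\frac{dA(t)}{dt} = r\,A(t)\left(1 - \frac{A(t)}{K}\right)
\quad\Longrightarrow\quad
A(t) = \frac{K}{1 + \left(\tfrac{K}{A_0} - 1\right)e^{-rt}},
```

where \(A(t)\) is the converted area at time \(t\), \(A_0\) the initial converted area, \(r\) the conversion rate, and \(K\) the maximum convertible area. Obtaining such an analytical (or numerical) solution is precisely the step that, as noted above, limits how much behavioral complexity equation-based models can practically carry.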
We show a myriad of social and technical factors that facilitated or hindered players' prosocial behavior. We discuss how our findings might provide generalizable insights for player engagement and community-building in online games.", "title": "" }, { "docid": "7c3b0a4a509d936459881f066aa1ebd2", "text": "We report the cases of 4 patients who performed daily mirror therapy for 2 wk before undergoing elective limb amputation. One patient experienced no phantom limb pain (PLP). Two patients experienced rare episodes of mild PLP without effect on their participation in physical therapy (PT) or their quality of life. One patient reported daily, brief episodes of moderate PLP without effect on his participation in PT or his stated quality of life. These results indicate that preoperative mirror therapy may improve postamputation PT compliance and decrease the incidence of PLP. Future prospective studies are needed to confirm the results of this case series.", "title": "" } ]
scidocsrr