Dataset fields (per record):
  query_id            string, length 32
  query               string, length 6 to 5.38k characters
  positive_passages   list, 1 to 17 passages
  negative_passages   list, 9 to 100 passages
  subset              string, 7 classes
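As a rough illustration of how records with this schema might be consumed, the sketch below reads a hypothetical JSONL export of the dataset and pairs each query with its positive and negative passages. The file name scidocsrr.jsonl is an assumption, not something specified in this preview; the field names (query, positive_passages, negative_passages) and the per-passage keys (docid, text, title) follow the records shown below.

```python
import json

# Minimal sketch, assuming the dataset is exported as one JSON object per line
# with the fields listed in the schema above. The path is hypothetical.
PATH = "scidocsrr.jsonl"

def iter_training_pairs(path):
    """Yield (query, passage_text, label) triples for a retrieval model."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            query = row["query"]
            # Positive passages: 1 to 17 per query according to the schema.
            for passage in row["positive_passages"]:
                yield query, passage["text"], 1
            # Negative passages: 9 to 100 per query according to the schema.
            for passage in row["negative_passages"]:
                yield query, passage["text"], 0

if __name__ == "__main__":
    for query, text, label in iter_training_pairs(PATH):
        print(label, query[:40], "->", text[:60])
        break  # show a single example pair
```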
cb689657bc3e919cb3a8e98737d66df5
Statistics and Causal Inference : A Review
[ { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "d1e6378b7909a6200b35a7c7e21b2c60", "text": "This paper analyzes and simulates the Li-ion battery charging process for a solar powered battery management system. The battery is charged using a non-inverting synchronous buck-boost DC/DC power converter. The system operates in buck, buck-boost, or boost mode, according to the supply voltage conditions from the solar panels. Rapid changes in atmospheric conditions or sunlight incident angle cause supply voltage variations. This study develops an electrochemical-based equivalent circuit model for a Li-ion battery. A dynamic model for the battery charging process is then constructed based on the Li-ion battery electrochemical model and the buck-boost power converter dynamic model. The battery charging process forms a system with multiple interconnections. Characteristics, including battery charging system stability margins for each individual operating mode, are analyzed and discussed. Because of supply voltage variation, the system can switch between buck, buck-boost, and boost modes. The system is modeled as a Markov jump system to evaluate the mean square stability of the system. The MATLAB based Simulink piecewise linear electric circuit simulation tool is used to verify the battery charging model.", "title": "" }, { "docid": "236dcb6dd7e04c0600c2f0b90f94c5dd", "text": "Main call for Cloud computing is that users only utilize what they required and only pay for what they really use. Mobile Cloud Computing refers to an infrastructure where data processing and storage can happen away from mobile device. Portio research estimates that mobile subscribers worldwide will reach 6.9 billion by the end of 2013 and 8 billion by the end of 2016. Ericsson also forecasts that mobile subscriptions will reach 9 billion by 2017. Due to increasing use of mobile devices the requirement of cloud computing in mobile devices arise, which gave birth to Mobile Cloud Computing. Mobile devices do not need to have large storage capacity and powerful CPU speed. Due to storing data on cloud there is an issue of data security. Because of the risk associated with data storage many IT professionals are not showing their interest towards Mobile Cloud Computing. To ensure the correctness of users' data in the cloud, we propose an effective mechanism with salient feature of data integrity and confidentiality. This paper proposed a mechanism which uses the concept of RSA algorithm, Hash function along with several cryptography tools to provide better security to the data stored on the mobile cloud.", "title": "" }, { "docid": "2bbb1b4081d7d55c475b34a092c6de69", "text": "We enrich a curated resource of commonsense knowledge by formulating the problem as one of knowledge base completion (KBC). Most work in KBC focuses on knowledge bases like Freebase that relate entities drawn from a fixed set. However, the tuples in ConceptNet (Speer and Havasi, 2012) define relations between an unbounded set of phrases. We develop neural network models for scoring tuples on arbitrary phrases and evaluate them by their ability to distinguish true held-out tuples from false ones. We find strong performance from a bilinear model using a simple additive architecture to model phrases. 
We manually evaluate our trained model’s ability to assign quality scores to novel tuples, finding that it can propose tuples at the same quality level as mediumconfidence tuples from ConceptNet.", "title": "" }, { "docid": "69ad93c7b6224321d69456c23a4185ce", "text": "Modeling fashion compatibility is challenging due to its complexity and subjectivity. Existing work focuses on predicting compatibility between product images (e.g. an image containing a t-shirt and an image containing a pair of jeans). However, these approaches ignore real-world ‘scene’ images (e.g. selfies); such images are hard to deal with due to their complexity, clutter, variations in lighting and pose (etc.) but on the other hand could potentially provide key context (e.g. the user’s body type, or the season) for making more accurate recommendations. In this work, we propose a new task called ‘Complete the Look’, which seeks to recommend visually compatible products based on scene images. We design an approach to extract training data for this task, and propose a novel way to learn the scene-product compatibility from fashion or interior design images. Our approach measures compatibility both globally and locally via CNNs and attention mechanisms. Extensive experiments show that our method achieves significant performance gains over alternative systems. Human evaluation and qualitative analysis are also conducted to further understand model behavior. We hope this work could lead to useful applications which link large corpora of real-world scenes with shoppable products.", "title": "" }, { "docid": "d60cbf76534621768c1a5101abae5537", "text": "AbstrAct Achieving IT-business alignment has been a long-standing, critical, information management issue. A theoretical framework of the maturity levels of management practices and strategic IT choices that facilitate alignment was empirically tested and validated. Confirmatory factor analysis (CFA) validated 6 factors and identified 22 indices to measure strategic alignment maturity. A mixed model repeated measure analysis of variance (ANOVA) obtained significant results for both the main effect and interaction effect of differences for the 6 maturity factors across the 11 business units. Regression analysis found a positive association between overall strategic alignment maturity and respondents' self-rated maturity. These exploratory findings show promise for the assessment instrument to be used as a diagnostic tool for organizations to improve their IT-business alignment maturity levels.", "title": "" }, { "docid": "de2f36b553a1b7d53659fd5d42a051d9", "text": "In order to fit the diverse scenes in life, more and more people choose to join different types of social networks simultaneously. These different networks often contain the information that people leave in a particular scene. Under the circumstances, identifying the same person across different social networks is a crucial way to help us understand the user from multiple aspects. The current solution to this problem focuses on using only profile matching or relational matching method. Some other methods take the two aspect of information into consideration, but they associate the profile similarity with relation similarity simply by a parameter. The matching results on two dimensions may have large difference, directly link them may reduce the overall similarity. Unlike the most of the previous work, we propose to utilize collaborative training method to tackle this problem. 
We run experiments on two real-world social network datasets, and the experimental results confirmed the effectiveness of our method.", "title": "" }, { "docid": "7070e7355da98ae8363b0579fde99f59", "text": "We study a two-level inventory system that is subject to failures and repairs. The objective is to minimize the expected total cost so as to determine the production plan for a single quantity demand. The expected total cost consists of the inventory carrying costs for finished and unfinished items, the backlog cost for not meeting the demand due-date, and the planning costs associated with the ordering schedule of unfinished items. The production plan consists of the optimal number of lot sizes, the optimal size for each lot, the optimal ordering schedule for unfinished items, and the optimal due-date to be assigned to the demand. To gain insight, we solve special cases and use their results to device an efficient solution approach for the main model. The models are solved to optimality and the solution is either obtained in closed form or through very efficient algorithms.", "title": "" }, { "docid": "89d895248235c7395fe1f12a39ee7267", "text": "This work elucidates the solder reflow of eutectic (63Sn/37Pb) solder bump using fluxless formic acid. The dependences of formic acid reflow on metallic oxide reduction are investigated experimentally for eutectic solder bump. Appropriate temperature profile and sufficient formic acid concentration are the key factors to optimize the metallic oxide reduction during thermal reflow. A positive pressure in process chamber is beneficial to control the variations of unwanted oxygen and the regrowth of metallic oxide during mechanical wafer switching. A reflowed solder joint degrades considerably under shear strength testing after several reflow times.", "title": "" }, { "docid": "9809596697119fb50978470aaec837d6", "text": "Tuning of PID controller parameters is one of the usual tasks of the control engineers due to the wide applications of this class of controllers in industry. In this paper the Iterative Feedback Tuning (IFT) method is applied to tune the PID parameters. The main advantage of this method is that there is no need to the model of the system, so that is useful in many processes which there is no obvious model of the system. In many cases this feature can be so useful in tuning the controller parameters. The IFT is applied here to tune the PID parameters. Speed control of DC motor was employed to demonstrate the effectiveness of the method. The results is compared with other tuning methods and represented the good performance of the designed controller. As it is shown, the step response of the system controlled by PID tuned with IFT has more robustness and performs well.", "title": "" }, { "docid": "908baa7a1004a372f1e8e42f037e0501", "text": "Scientists depend on literature search to find prior work that is relevant to their research ideas. We introduce a retrieval model for literature search that incorporates a wide variety of factors important to researchers, and learns the weights of each of these factors by observing citation patterns. We introduce features like topical similarity and author behavioral patterns, and combine these with features from related work like citation count and recency of publication. 
We present an iterative process for learning weights for these features that alternates between retrieving articles with the current retrieval model, and updating model weights by training a supervised classifier on these articles. We propose a new task for evaluating the resulting retrieval models, where the retrieval system takes only an abstract as its input and must produce as output the list of references at the end of the abstract's article. We evaluate our model on a collection of journal, conference and workshop articles from the ACL Anthology Reference Corpus. Our model achieves a mean average precision of 28.7, a 12.8 point improvement over a term similarity baseline, and a significant improvement both over models using only features from related work and over models without our iterative learning.", "title": "" }, { "docid": "b640ed2bd02ba74ee0eb925ef6504372", "text": "In the discussion about Future Internet, Software-Defined Networking (SDN), enabled by OpenFlow, is currently seen as one of the most promising paradigm. While the availability and scalability concerns rises as a single controller could be alleviated by using replicate or distributed controllers, there lacks a flexible mechanism to allow controller load balancing. This paper proposes BalanceFlow, a controller load balancing architecture for OpenFlow networks. By utilizing CONTROLLER X action extension for OpenFlow switches and cross-controller communication, one of the controllers, called “super controller”, can flexibly tune the flow-requests handled by each controller, without introducing unacceptable propagation latencies. Experiments based on real topology show that BalanceFlow can adjust the load of each controller dynamically.", "title": "" }, { "docid": "63115b12e4a8192fdce26eb7e2f8989a", "text": "Theorems and techniques to form different types of transformationally invariant processing and to produce the same output quantitatively based on either transformationally invariant operators or symmetric operations have recently been introduced by the authors. In this study, we further propose to compose a geared rotationally identical CNN system (GRI-CNN) with a small angle increment by connecting networks of participated processes at the first flatten layer. Using an ordinary CNN structure as a base, requirements for constructing a GRI-CNN include the use of either symmetric input vector or kernels with an angle increment that can form a complete cycle as a \"gearwheel\". Four basic GRI-CNN structures were studied. Each of them can produce quantitatively identical output results when a rotation angle of the input vector is evenly divisible by the increment angle of the gear. Our study showed when a rotated input vector does not match to a gear angle, the GRI-CNN can also produce a highly consistent result. With an ultrafine increment angle (e.g., 1 or 0.1), a virtually isotropic CNN system can be constructed.", "title": "" }, { "docid": "a8c1224f291df5aeb655a2883b16bcfb", "text": "We present a scalable approach to automatically suggest relevant clothing products, given a single image without metadata. We formulate the problem as cross-scenario retrieval: the query is a real-world image, while the products from online shopping catalogs are usually presented in a clean environment. We divide our approach into two main stages: a) Starting from articulated pose estimation, we segment the person area and cluster promising image regions in order to detect the clothing classes present in the query image. 
b) We use image retrieval techniques to retrieve visually similar products from each of the detected classes. We achieve clothing detection performance comparable to the state-of-the-art on a very recent annotated dataset, while being more than 50 times faster. Finally, we present a large scale clothing suggestion scenario, where the product database contains over one million products.", "title": "" }, { "docid": "7f2d7d45f7db57790b3633dc05f90f8d", "text": "Automatic question answering (QA), which can greatly facilitate the access to information, is an important task in artificial intelligence. Recent years have witnessed the development of QA methods based on deep learning. However, a great amount of data is needed to train deep neural networks, and it is laborious to annotate training data for factoid QA of new domains or languages. In this paper, a distantly supervised method is proposed to automatically generate QA pairs. Additional efforts are paid to let the generated questions reflect the query interests and expression styles of users by exploring the community QA. Specifically, the generated questions are selected according to the estimated probabilities they are asked. Diverse paraphrases of questions are mined from community QA data, considering that the model trained on monotonous synthetic questions is very sensitive to variants of question expressions. Experimental results show that the model solely trained on generated data via the distant supervision and mined paraphrases could answer real-world questions with the accuracy of 49.34%. When limited annotated training data is available, significant improvements could be achieved by incorporating the generated data. An improvement of 1.35 absolute points is still observed on WebQA, a dataset with large-scale annotated training samples.", "title": "" }, { "docid": "7c210ef8c6475ab33f0aeb96d044665d", "text": "This paper presents a new class of dual-mode dual-band filters in which each polarization is dedicated to a selected band. The equivalent circuit is a parallel combination of two inline networks that represent each polarization. A transmission zero is generated between the two bands by properly adjusting the relative orientations of the input and output coupling apertures. For filters where each branch contains an odd number of resonators, the input and output apertures have the same orientations. For filters where the order of each branch is even, the two apertures are orthogonal to each other. Filters using three and four dual-mode cavities are designed and presented. Different arrangements of the dual-mode cavities are also presented.", "title": "" }, { "docid": "921840f75f1270bcb148d9a74ff4db58", "text": "Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator’s internal representation, we can effectively modulate the discriminator’s accuracy and maintain useful and informative gradients. 
We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods. (Video1)", "title": "" }, { "docid": "2a0577aa61ca1cbde207306fdb5beb08", "text": "In recent years, researchers have shown that unwanted web tracking is on the rise, as advertisers are trying to capitalize on users' online activity, using increasingly intrusive and sophisticated techniques. Among these, browser fingerprinting has received the most attention since it allows trackers to uniquely identify users despite the clearing of cookies and the use of a browser's private mode. In this paper, we investigate and quantify the fingerprintability of browser extensions, such as, AdBlock and Ghostery. We show that an extension's organic activity in a page's DOM can be used to infer its presence, and develop XHound, the first fully automated system for fingerprinting browser extensions. By applying XHound to the 10,000 most popular Google Chrome extensions, we find that a significant fraction of popular browser extensions are fingerprintable and could thus be used to supplement existing fingerprinting methods. Moreover, by surveying the installed extensions of 854 users, we discover that many users tend to install different sets of fingerprintable browser extensions and could thus be uniquely, or near-uniquely identifiable by extension-based fingerprinting. We use XHound's results to build a proof-of-concept extension-fingerprinting script and show that trackers can fingerprint tens of extensions in just a few seconds. Finally, we describe why the fingerprinting of extensions is more intrusive than the fingerprinting of other browser and system properties, and sketch two different approaches towards defending against extension-based fingerprinting.", "title": "" }, { "docid": "8aa3f60b221a4698a17e96765fd430fb", "text": "We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e., sequential) ones, as well as virtually all possibilities “in between” with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms.", "title": "" }, { "docid": "07e03419430b7ea8ca3c7b02f9340d46", "text": "Recently, [2] presented a security attack on the privacy-preserving outsourcing scheme for biometric identification proposed in [1]. 
In [2], the author claims that the scheme CloudBI-II proposed in [1] can be broken under the collusion case. That is, when the cloud server acts as a user to submit a number of identification requests, CloudBI-II is no longer secure. In this technical report, we will explicitly show that the attack method proposed in [2] doesn’t work in fact.", "title": "" } ]
scidocsrr
19edd2ee66e465505f2db9abef75c80d
Comparative evaluation of 15 kV SiC IGBT and 15 kV SiC MOSFET for 3-phase medium voltage high power grid connected converter applications
[ { "docid": "6b5e9fa6f81e311dcd5e8154b64a111c", "text": "Silicon Carbide (SiC) devices and modules have been developed with high blocking voltages for Medium Voltage power electronics applications. Silicon devices do not exhibit higher blocking voltage capability due to its relatively low band gap energy compared to SiC counterparts. For the first time, 12kV SiC IGBTs have been fabricated. These devices exhibit excellent switching and static characteristics. A Three-level Neutral Point Clamped Voltage Source Converter (3L-NPC VSC) has been simulated with newly developed SiC IGBTs. This 3L-NPC Converter is used as a 7.2kV grid interface for the solid state transformer and STATCOM operation. Also a comparative study is carried out with 3L-NPC VSC simulated with 10kV SiC MOSFET and 6.5kV Silicon IGBT device data.", "title": "" } ]
[ { "docid": "d3e409b074c4c26eb208b27b7b58a928", "text": "The increase in concern for carbon emission and reduction in natural resources for conventional power generation, the renewable energy based generation such as Wind, Photovoltaic (PV), and Fuel cell has gained importance. Out of which the PV based generation has gained significance due to availability of abundant sunlight. As the Solar power conversion is a low efficient conversion process, accurate and reliable, modeling of solar cell is important. Due to the non-linear nature of diode based PV model, the accurate design of PV cell is a difficult task. A built-in model of PV cell is available in Simscape, Simelectronics library, Matlab. The equivalent circuit parameters have to be computed from data sheet and incorporated into the model. However it acts as a stiff source when implemented with a MPPT controller. Henceforth, to overcome this drawback, in this paper a two-diode model of PV cell is implemented in Matlab Simulink with reduced four required parameters along with similar configuration of the built-in model. This model allows incorporation of MPPT controller. The I-V and P-V characteristics of these two models are investigated under different insolation levels. A PV based generation system feeding a DC load is designed and investigated using these two models and further implemented with MPPT based on P&O technique.", "title": "" }, { "docid": "1617f5581ff0e2ed46aa49e277431746", "text": "Direct communication between two or more devices without the intervention of a base station, known as device-to-device (D2D) communication, is a promising way to improve performance of cellular networks in terms of spectral and energy efficiency. The D2D communication paradigm has been largely exploited in non-cellular technologies such as Bluetooth or Wi-Fi but it has not yet been fully incorporated into existing cellular networks. In this regard, a new proposal focusing on the integration of D2D communication into LTE-A has been recently approved by the 3GPP standardization community as discussed in this paper. In cellular networks, D2D communication introduces several critical issues, such as interference management and decisions on whether devices should communicate directly or not. In this survey, we provide a thorough overview of the state of the art focusing on D2D communication, especially within 3GPP LTE/LTE-A. First, we provide in-depth classification of papers looking at D2D from several perspectives. Then, papers addressing all major problems and areas related to D2D are presented and approaches proposed in the papers are compared according to selected criteria. On the basis of the surveyed papers, we highlight areas not satisfactorily addressed so far and outline major challenges for future work regarding efficient integration of D2D in cellular networks.", "title": "" }, { "docid": "ad327b34d34887ae6380cbb07b7748bb", "text": "IEEE 802.15.4 is the de facto standard for Wireless Sensor Networks (WSNs) that outlines the specifications of the PHY layer and MAC sub-layer in these networks. The MAC protocol is needed to orchestrate sensor nodes access to the wireless communication medium. Although distinguished by a set of strengths that contributed to its popularity in various WSNs, IEEE 802.15.4 MAC suffers from several limitations that play a role in deteriorating its performance. 
Also, from a practical perspective, 80.15.4-based networks are usually deployed in the vicinity of other wireless networks that operate in the same ISM band. This means that 802.15.4 MAC should be ready to cope with interference from other networks. These facts have motivated efforts to devise improved IEEE 802.15.4 MAC protocols for WSNs. In this paper we provide a survey for these protocols and highlight the methodologies they follow to enhance the performance of the IEEE 802.15.4 MAC protocol.", "title": "" }, { "docid": "cfb08af0088de56519960beb9ee56607", "text": "Research into corpus-based semantics has focused on the development of ad hoc models that treat single tasks, or sets of closely related tasks, as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpus. As an alternative to this “one task, one model” approach, the Distributional Memory framework extracts distributional information once and for all from the corpus, in the form of a set of weighted word-link-word tuples arranged into a third-order tensor. Different matrices are then generated from the tensor, and their rows and columns constitute natural spaces to deal with different semantic problems. In this way, the same distributional information can be shared across tasks such as modeling word similarity judgments, discovering synonyms, concept categorization, predicting selectional preferences of verbs, solving analogy problems, classifying relations between word pairs, harvesting qualia structures with patterns or example pairs, predicting the typical properties of concepts, and classifying verbs into alternation classes. Extensive empirical testing in all these domains shows that a Distributional Memory implementation performs competitively against task-specific algorithms recently reported in the literature for the same tasks, and against our implementations of several state-of-the-art methods. The Distributional Memory approach is thus shown to be tenable despite the constraints imposed by its multi-purpose nature.", "title": "" }, { "docid": "67995490350c68f286029d8b401d78d8", "text": "OBJECTIVE\nModifiable risk factors for dementia were recently identified and compiled in a systematic review. The 'Lifestyle for Brain Health' (LIBRA) score, reflecting someone's potential for dementia prevention, was studied in a large longitudinal population-based sample with respect to predicting cognitive change over an observation period of up to 16 years.\n\n\nMETHODS\nLifestyle for Brain Health was calculated at baseline for 949 participants aged 50-81 years from the Maastricht Ageing Study. The predictive value of LIBRA for incident dementia and cognitive impairment was examined by using Cox proportional hazard models and by testing its relation with cognitive decline.\n\n\nRESULTS\nLifestyle for Brain Health predicted future risk of dementia, as well as risk of cognitive impairment. A one-point increase in LIBRA score related to 19% higher risk for dementia and 9% higher risk for cognitive impairment. LIBRA predicted rate of decline in processing speed, but not memory or executive functioning.\n\n\nCONCLUSIONS\nLifestyle for Brain Health (LIBRA) may help in identifying and monitoring risk status in dementia-prevention programmes, by targeting modifiable, lifestyle-related risk factors. 
Copyright © 2017 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "54fdab8bddb3a2f5be2fd9ef8937e5a7", "text": "This tutorial makes the case for developing a unified framework that manages information extraction from unstructured data (focusing in particular on text). We first survey research on information extraction in the database, AI, NLP, IR, and Web communities in recent years. Then we discuss why this is the right time for the database community to actively participate and address the problem of managing information extraction (including in particular the challenges of maintaining and querying the extracted information, and accounting for the imprecision and uncertainty inherent in the extraction process). Finally, we show how interested researchers can take the next step, by pointing to open problems, available datasets, applicable standards, and software tools. We do not assume prior knowledge of text management, NLP, extraction techniques, or machine learning.", "title": "" }, { "docid": "75a9715ce9eaffaa43df5470ad7cacca", "text": "Resting frontal electroencephalographic (EEG) asymmetry has been hypothesized as a marker of risk for major depressive disorder (MDD), but the extant literature is based predominately on female samples. Resting frontal asymmetry was assessed on 4 occasions within a 2-week period in 306 individuals aged 18-34 (31% male) with (n = 143) and without (n = 163) lifetime MDD as defined by the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (American Psychiatric Association, 1994). Lifetime MDD was linked to relatively less left frontal activity for both sexes using a current source density (CSD) reference, findings that were not accounted for solely by current MDD status or current depression severity, suggesting that CSD-referenced EEG asymmetry is a possible endophenotype for depression. In contrast, results for average and linked mastoid references were less consistent but demonstrated a link between less left frontal activity and current depression severity in women.", "title": "" }, { "docid": "7e8b5b1b1c7720cb4d81922dc7099a99", "text": "The anthrone reagent of Dreywood (1) has been applied to the determination of blood sugar by Durham, Bloom, Lewis, and Mandel (2), Fetz and Petrie (3), and Zipf and Waldo (4). In the procedures developed by these authors, the heat resulting from mixing sulfuric acid with water causes the reaction to take place. Greater precision is obtained by heating the mixture of anthrone, sulfuric acid, and carbohydrate for a definite time in a constant temperature bath. Scott and Melvin (5) reported that the “heat of mixing” procedure is satisfactory if accuracy no better than ~5 per cent is required. They obtained data showing a coefficient of variation of kO.48 per cent in their method, which involves heating in an ethylene glycol bath at 90” for 16 minutes. In our laboratory a method was developed for the determination of dextran in blood and urine in which a mixture of anthrone reagent and dextran solution is heated in a boiling water bath for a definite time (6). In twelve determinations by this method with dextran solution there was observed a coefficient of variation of kO.36 per cent, and in twelve determinations in which the dextran was precipitated from solution by alcohol the coefficient of variation was f0.56 per cent. 
Our observations with respect to the precision of the “heat of mixing” procedure, compared with heating for a definite time in a constant temperature bath, are in agreement with the work of Scott and Melvin (5). We have adapted our procedure for the determination of dextran to the estimation of the sugar in blood and spinal fluid. A stabilized anthrone reagent has been developed, and certain findings of interest are reported.", "title": "" }, { "docid": "c4bcdd191b4d04368f12c967b361a7e1", "text": "Inductive concept learning is the task of learning to assign cases to a discrete set of classes. In real-world applications of concept learning, there are many different types of cost involved. The majority of the machine learning literature ignores all types of cost (unless accuracy is interpreted as a type of cost measure). A few papers have investigated the cost of misclassification errors. Very few papers have examined the many other types of cost. In this paper, we attempt to create a taxonomy of the different types of cost that are involved in inductive concept learning. This taxonomy may help to organize the literature on cost-sensitive learning. We hope that it will inspire researchers to investigate all types of cost in inductive concept learning in more depth.", "title": "" }, { "docid": "6a2e6492695beab2c0a6d479bffd65e1", "text": "Electroencephalogram (EEG) signal based emotion recognition, as a challenging pattern recognition task, has attracted more and more attention in recent years and widely used in medical, Affective Computing and other fields. Traditional approaches often lack of the high-level features and the generalization ability is poor, which are difficult to apply to the practical application. In this paper, we proposed a novel model for multi-subject emotion classification. The basic idea is to extract the high-level features through the deep learning model and transform traditional subject-independent recognition tasks into multi-subject recognition tasks. Experiments are carried out on the DEAP dataset, and our results demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "496fdf000074eb55f9e42e356d97b4b1", "text": "Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning end-to-end training. In this work, we experiment with incorporating richer structural distributions, encoded using graphical models, within deep networks. We show that these structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard softselection approach, such as attending to partial segmentations or to subtrees. We experiment with two different classes of structured attention networks: a linearchain conditional random field and a graph-based parsing model, and describe how these models can be practically implemented as neural network layers. Experiments show that this approach is effective for incorporating structural biases, and structured attention networks outperform baseline attention models on a variety of synthetic and real tasks: tree transduction, neural machine translation, question answering, and natural language inference. 
We further find that models trained in this way learn interesting unsupervised hidden representations that generalize simple attention.", "title": "" }, { "docid": "6b6dd935eebca1ea08e10af8afcbfbdd", "text": "CONTEXT\nThe quality of consumer health information on the World Wide Web is an important issue for medicine, but to date no systematic and comprehensive synthesis of the methods and evidence has been performed.\n\n\nOBJECTIVES\nTo establish a methodological framework on how quality on the Web is evaluated in practice, to determine the heterogeneity of the results and conclusions, and to compare the methodological rigor of these studies, to determine to what extent the conclusions depend on the methodology used, and to suggest future directions for research.\n\n\nDATA SOURCES\nWe searched MEDLINE and PREMEDLINE (1966 through September 2001), Science Citation Index (1997 through September 2001), Social Sciences Citation Index (1997 through September 2001), Arts and Humanities Citation Index (1997 through September 2001), LISA (1969 through July 2001), CINAHL (1982 through July 2001), PsychINFO (1988 through September 2001), EMBASE (1988 through June 2001), and SIGLE (1980 through June 2001). We also conducted hand searches, general Internet searches, and a personal bibliographic database search.\n\n\nSTUDY SELECTION\nWe included published and unpublished empirical studies in any language in which investigators searched the Web systematically for specific health information, evaluated the quality of Web sites or pages, and reported quantitative results. We screened 7830 citations and retrieved 170 potentially eligible full articles. A total of 79 distinct studies met the inclusion criteria, evaluating 5941 health Web sites and 1329 Web pages, and reporting 408 evaluation results for 86 different quality criteria.\n\n\nDATA EXTRACTION\nTwo reviewers independently extracted study characteristics, medical domains, search strategies used, methods and criteria of quality assessment, results (percentage of sites or pages rated as inadequate pertaining to a quality criterion), and quality and rigor of study methods and reporting.\n\n\nDATA SYNTHESIS\nMost frequently used quality criteria used include accuracy, completeness, readability, design, disclosures, and references provided. Fifty-five studies (70%) concluded that quality is a problem on the Web, 17 (22%) remained neutral, and 7 studies (9%) came to a positive conclusion. Positive studies scored significantly lower in search (P =.02) and evaluation (P =.04) methods.\n\n\nCONCLUSIONS\nDue to differences in study methods and rigor, quality criteria, study population, and topic chosen, study results and conclusions on health-related Web sites vary widely. Operational definitions of quality criteria are needed.", "title": "" }, { "docid": "ac2f179099dc727bc0c065244a66cf19", "text": "Software applications continue to grow in terms of the number of features they offer, making personalization increasingly important. Research has shown that most users prefer the control afforded by an adaptable approach to personalization rather than a system-controlled adaptive approach. No study, however, has compared the efficiency of the two approaches. In a controlled lab study with 27 subjects we compared the measured and perceived efficiency of three menu conditions: static, adaptable and adaptive. 
Each was implemented as a split menu, in which the top four items remained static, were adaptable by the subject, or adapted according to the subject's frequently and recently used items. The static menu was found to be significantly faster than the adaptive menu, and the adaptable menu was found to be significantly faster than the adaptive menu under certain conditions. The majority of users preferred the adaptable menu overall. Implications for interface design are discussed.", "title": "" }, { "docid": "61f32a4ec84063e70c0f2a7378790a8e", "text": "Grounded theory method (GTM) is increasingly used in HCI and CSCW research (Fig. 1 ). GTM offers a rigorous way to explore a domain, with an emphasis on discovering new insights, testing those insights, and building partial understandings into a broader theory of the domain. The strength of the method—as a full method— is the ability to make sense of diverse phenomena, to construct an account of those phenomena that is strongly based in the data (“grounded” in the data), to develop that account through an iterative and principled series of challenges and modifi cations, and to communicate the end result to others in a way that is convincing and valuable to their own research and understanding. GTM is particularly appropriate for making sense of a domain without a dominant theory. It is not concerned with testing existing theories. Rather, GTM is concerned with the creation of theory, and with the rigorous and even ruthless examination of that new theory. Grounded Theory Method is exactly that—a method , or rather, a family of methods (Babchuk, 2010 )—for the development of theory. GTM makes explicit use of the capabilities that nearly all human share, to be curious about the world, to understand the world, and to communicate that understanding to others. GTM adds to these lay human capabilities a rigorous, scientifi c set of ways of inquiring, ways of thinking, and ways of knowing that can add power and explanatory strength to HCI and CSCW research. Curiosity, Creativity, and Surprise as Analytic Tools: Grounded Theory Method", "title": "" }, { "docid": "0e68fbcd564e43df2b4e1866ab88e833", "text": "This paper considers the decision-making problem for a human-driven vehicle crossing a road intersection in the presence of other, potentially errant, drivers. Our approach relies on a novel threat assessment module, which combines an intention predictor based on support vector machines with an efficient threat assessor using rapidly-exploring random trees. This module warns the host driver with the computed threat level and the corresponding best “escape maneuver” through the intersection, if the threat is sufficiently large. Through experimental results with small autonomous and human-driven vehicles, we demonstrate that this threat assessment module can be used in real-time to minimize the risk of collision.", "title": "" }, { "docid": "c43532ec0c38136c3563568a73e8f3ce", "text": "BACKGROUND & AIMS\nThe asialoglycoprotein receptor on hepatocyte membranes recognizes the galactose residues of glycoproteins. 
We investigated the specificity, accuracy and threshold value of asialoglycoprotein receptor imaging for estimating liver reserve via scintigraphy using (111)In-hexavalent lactoside in mouse models.\n\n\nMETHODS\n(111)In-hexavalent lactoside scintigraphy for asialoglycoprotein receptor imaging was performed on groups of normal mice, orthotopic SK-HEP-1-bearing mice, subcutaneous HepG2-bearing mice, mice with 20-80% partial hepatectomy and mice with acute hepatitis induced by acetaminophen. Liver reserve was measured by relative liver uptake and compared with normal mice. Asialoglycoprotein receptor blockade was performed via an in vivo asialofetuin competitive binding assay.\n\n\nRESULTS\nA total of 73.64±7.11% of the injection dose accumulated in the normal liver tissue region, and radioactivity was barely detected in the hepatoma region. When asialoglycoprotein receptor was blocked using asialofetuin, less than 0.41±0.04% of the injection dose was detected as background in the liver. Asialoglycoprotein receptor imaging data revealed a linear correlation between (111)In-hexavalent lactoside binding and residual liver mass (R(2)=0.8548) in 20-80% of partially hepatectomized mice, demonstrating the accuracy of (111)In-hexavalent lactoside imaging for measuring the functional liver mass. Asialoglycoprotein receptor imaging data in mice with liver failure induced using 600mg/kg acetaminophen revealed 19-45% liver reserve relative to normal mice and a fatal threshold value of 25% liver reserve.\n\n\nCONCLUSION\nThe (111)In-hexavalent lactoside imaging method appears to be a good, specific, visual and quantitative predictor of functional liver reserve. The diagnostic threshold for survival was at 25% liver reserve in mice.", "title": "" }, { "docid": "e3446eb521ba7fa98c00662645ad0910", "text": "This article describes the antihyperglycemic activity, in vivo antioxidant potential, effect on hemoglobin glycosylation, estimation of liver glycogen content, and in vitro peripheral glucose utilization of bacosine, a triterpene isolated from the ethyl acetate fraction (EAF) of the ethanolic extract of Bacopa monnieri. Bacosine produced a significant decrease in the blood glucose level when compared with the diabetic control rats both in the single administration as well as in the multiple administration study. It was observed that the compound reversed the weight loss of the diabetic rats, returning the values to near normal. Bacosine also prevented elevation of glycosylated hemoglobin in vitro with an IC₅₀ value of 7.44 µg/mL, comparable with the one for the reference drug α-tocopherol. Administration of bacosine and glibenclamide significantly decreased the levels of malondialdehyde (MDA), and increased the levels of reduced glutathione (GSH) and the activities of superoxide dismutase (SOD) and catalase (CAT) in the liver of diabetic rats. Bacosine increased glycogen content in the liver of diabetic rats and peripheral glucose utilization in the diaphragm of diabetic rats in vitro, which is comparable with the action of insulin. Thus, bacosine might have insulin-like activity and its antihyperglycemic effect might be due to an increase in peripheral glucose consumption as well as protection against oxidative damage in alloxanized diabetes.", "title": "" }, { "docid": "eaf7b6b0cc18453538087cc90254dbd8", "text": "We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). 
For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p and imposes no constraints on light, camera or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires.\n Prior irregular z-buffer work relies heavily on GPU compute. Instead we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.", "title": "" }, { "docid": "1258939378850f7d89f6fa860be27c39", "text": "Sparse methods and the use of Winograd convolutions are two orthogonal approaches, each of which significantly accelerates convolution computations in modern CNNs. Sparse Winograd merges these two and thus has the potential to offer a combined performance benefit. Nevertheless, training convolution layers so that the resulting Winograd kernels are sparse has not hitherto been very successful. By introducing a Winograd layer in place of a standard convolution layer, we can learn and prune Winograd coefficients “natively” and obtain sparsity level beyond 90% with only 0.1% accuracy loss with AlexNet on ImageNet dataset. Furthermore, we present a sparse Winograd convolution algorithm and implementation that exploits the sparsity, achieving up to 31.7 effective TFLOP/s in 32-bit precision on a latest Intel Xeon CPU, which corresponds to a 5.4× speedup over a state-of-the-art dense convolution implementation.", "title": "" }, { "docid": "d8aecc89815b81ea7402f823c5bb80d3", "text": "When a problem is large or difficult to solve, computers are often used to find the solution. But when the problem becomes too large, traditional methods of finding the answer may not be enough. It is in turning to nature that inspiration can be found to solve these difficult problems. Artificial intelligence seeks to emulate creatures and processes found in nature, and turn their techniques for solving a problem into an algorithm. Many such metaheuristic algorithms have been developed, but there is a continuous search for better, faster algorithms. The recently developed Firefly Algorithm has been shown to outperform the longstanding Particle Swarm Optimization, and this work aims to verify those results and improve upon them by comparing the two algorithms with a large scale application. A direct hardware implementation of the Firefly Algorithm is also proposed, to speed up performance in embedded systems applications.", "title": "" } ]
scidocsrr
f3ca4fb9556ba9af5caf76dc52bec702
Multilevel Inverter Topology Survey Master of Science Thesis in Electric Power Engineering
[ { "docid": "165fbade7d495ce47a379520697f0d75", "text": "Neutral-point-clamped (NPC) inverters are the most widely used topology of multilevel inverters in high-power applications (several megawatts). This paper presents in a very simple way the basic operation and the most used modulation and control techniques developed to date. Special attention is paid to the loss distribution in semiconductors, and an active NPC inverter is presented to overcome this problem. This paper discusses the main fields of application and presents some technological problems such as capacitor balance and losses.", "title": "" }, { "docid": "913709f4fe05ba2783c3176ed00015fe", "text": "A generalization of the PWM (pulse width modulation) subharmonic method for controlling single-phase or three-phase multilevel voltage source inverters (VSIs) is considered. Three multilevel PWM techniques for VSI inverters are presented. An analytical expression of the spectral components of the output waveforms covering all the operating conditions is derived. The analysis is based on an extension of Bennet's method. The improvements in harmonic spectrum are pointed out, and several examples are presented which prove the validity of the multilevel modulation. Improvements in the harmonic contents were achieved due to the increased number of levels.<<ETX>>", "title": "" } ]
[ { "docid": "ba79dd4818facbf0cef50bb1422f43e6", "text": "A nonlinear energy operator (NEO) gives an estimate of the energy content of a linear oscillator. This has been used to quantify the AM-FM modulating signals present in a sinusoid. Here, the authors give a new interpretation of NEO and extend its use in stochastic signals. They show that NEO accentuates the high-frequency content. This instantaneous nature of NEO and its very low computational burden make it an ideal tool for spike detection. The efficacy of the proposed method has been tested with simulated signals as well as with real electroencephalograms (EEGs).", "title": "" }, { "docid": "2ec0db3840965993e857b75bd87a43b7", "text": "Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental trade-off between spatial and angular resolution, but there has been limited understanding of this trade-off theoretically or numerically. Moreover, it is very difficult to evaluate the design of a light field camera because a new design is usually reported with its prototype and rendering algorithm, both of which affect resolution.\n In this article, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the prefiltering model of lenslet-based light field cameras. The main novelty of our model is in considering the full space-angle sensitivity profile of the photosensor—in particular, real pixels have nonuniform angular sensitivity, responding more to light along the optical axis rather than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows to compare them in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.", "title": "" }, { "docid": "d8cd05b5a187e8bc3eacd8777fb36218", "text": "In this article we review bony changes resulting from alterations in intracranial pressure (ICP) and the implications for ophthalmologists and the patients for whom we care. Before addressing ophthalmic implications, we will begin with a brief overview of bone remodeling. Bony changes seen with chronic intracranial hypotension and hypertension will be discussed. The primary objective of this review was to bring attention to bony changes seen with chronic intracranial hypotension. Intracranial hypotension skull remodeling can result in enophthalmos. In advanced disease enophthalmos develops to a degree that is truly disfiguring. The most common finding for which subjects are referred is ocular surface disease, related to loss of contact between the eyelids and the cornea. Other abnormalities seen include abnormal ocular motility and optic atrophy. Recognition of such changes is important to allow for diagnosis and treatment prior to advanced clinical deterioration. 
Routine radiographic assessment of bony changes may allow for the identification of patient with abnormal ICP prior to the development of clinically significant disease.", "title": "" }, { "docid": "7ebe34ac6e43f2b810a4dc889629fd07", "text": "The aim of this study was to apply the cognitive behavioral model of problematic Internet use to the context of online game use to obtain a better understanding of problematic use of online games and its negative consequences. In total, 597 online game playing adolescents aged 12–22 years participated in this study. Results showed that the cognitive behavioral model of problematic Internet use can also be used in the context of online game use. More specifically, preference for online social interaction, mood regulation and deficient self-regulation appeared to play an important role in predicting negative outcomes from problematic online game use. Together, these cognitions and behaviors explained 79% of the variance of negative outcomes scores. These findings can be used to develop strategies that aim at reducing problematic online game behavior and its negative consequences. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "19d554b2ef08382418979bf7ceb15baf", "text": "In this paper, we address the cross-lingual topic modeling, which is an important technique that enables global enterprises to detect and compare topic trends across global markets. Previous works in cross-lingual topic modeling have proposed methods that utilize parallel or comparable corpus in constructing the polylingual topic model. However, parallel or comparable corpus in many cases are not available. In this research, we incorporate techniques of mapping cross-lingual word space and the topic modeling (LDA) and propose two methods: Translated Corpus with LDA (TC-LDA) and Post Match LDA (PM-LDA). The cross-lingual word space mapping allows us to compare words of different languages, and LDA enables us to group words into topics. Both TC-LDA and PM-LDA do not need parallel or comparable corpus and hence have more applicable domains. The effectiveness of both methods is evaluated using UM-Corpus and WS-353. Our evaluation results indicate that both methods are able to identify similar documents written in different language. In addition, PM-LDA is shown to achieve better performance than TC-LDA, especially when document length is short.", "title": "" }, { "docid": "c8dbc63f90982e05517bbdb98ebaeeb5", "text": "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). 
We conducted experiments on how to formulate the emotionannotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.", "title": "" }, { "docid": "d72f47ad136ebb9c74abe484980b212f", "text": "This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Qlearning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.", "title": "" }, { "docid": "b894e6a16f5082bc3c28894fedc87232", "text": "Goal: The use of an online game for learning in higher education aims to make complex theoretical knowledge more approachable. Permanent repetition will lead to a more in-depth learning. Objective: To gain insight into whether and to what extent, online games have the potential to contribute to student learning in higher education. Experimental Setting: The online game was used for the first time during a lecture on Structural Concrete at Master’s level, involving 121 seventh semester students. Methods: Pretest/posttest experimental control group design with questionnaires and an independent online evaluation. Results: The minimum learning result of playing the game was equal to that achieved with traditional methods. A factor called “joy” was introduced, according to Nielsen (2002), which was amazingly high. Conclusion: The experimental findings support the efficacy of game playing. Students enjoyed this kind of e-Learning.", "title": "" }, { "docid": "4ed4b86c8ac90cd1fd953ccd08e652bf", "text": "Dynamic graphs are a powerful way to model an evolving set of objects and their ongoing interactions. A broad spectrum of systems, such as information, communication, and social, are naturally represented by dynamic graphs. Outlier (or anomaly) detection in dynamic graphs can provide unique insights into the relationships of objects and identify novel or emerging relationships. To date, outlier detection in dynamic graphs has been studied in the context of graph streams, focusing on the analysis and comparison of entire graph objects. However, the volume and velocity of data are necessitating a transition from outlier detection in the context of graph streams to outlier detection in the context of edge streams–where the stream consists of individual graph edges instead of entire graph objects. In this paper, we propose the first approach for outlier detection in edge streams. We first describe a highlevel model for outlier detection based on global and local structural properties of a stream. We propose a novel application of the Count-Min sketch for approximating these properties, and prove probabilistic error bounds on our edge outlier scoring functions. Our sketch-based implementation provides a scalable solution, having constant time updates and constant space requirements. 
Experiments on synthetic and real world datasets demonstrate our method’s scalability, effectiveness for discovering outliers, and the effects of approximation.", "title": "" }, { "docid": "850becfa308ce7e93fea77673db8ab50", "text": "Controlled generation of text is of high practical use. Recent efforts have made impressive progress in generating or editing sentences with given textual attributes (e.g., sentiment). This work studies a new practical setting of text content manipulation. Given a structured record, such as (PLAYER: Lebron, POINTS: 20, ASSISTS: 10), and a reference sentence, such as Kobe easily dropped 30 points, we aim to generate a sentence that accurately describes the full content in the record, with the same writing style (e.g., wording, transitions) of the reference. The problem is unsupervised due to lack of parallel data in practice, and is challenging to minimally yet effectively manipulate the text (by rewriting/adding/deleting text portions) to ensure fidelity to the structured content. We derive a dataset from a basketball game report corpus as our testbed, and develop a neural method with unsupervised competing objectives and explicit content coverage constraints. Automatic and human evaluations show superiority of our approach over competitive methods including a strong rule-based baseline and prior approaches designed for style transfer.", "title": "" }, { "docid": "298d0770cb97f124b06268f6de5b144f", "text": "Cerebral blood flow (CBF) is coupled to neuronal activity and is imaged in vivo to map brain activation. CBF is also modified by afferent projection fibres that release vasoactive neurotransmitters in the perivascular region, principally on the astrocyte endfeet that outline cerebral blood vessels. However, the role of astrocytes in the regulation of cerebrovascular tone remains uncertain. Here we determine the impact of intracellular Ca2+ concentrations ([Ca2+]i) in astrocytes on the diameter of small arterioles by using two-photon Ca2+ uncaging to increase [Ca2+]i. Vascular constrictions occurred when Ca2+ waves evoked by uncaging propagated into the astrocyte endfeet and caused large increases in [Ca2+]i. The vasoactive neurotransmitter noradrenaline increased [Ca2+]i in the astrocyte endfeet, the peak of which preceded the onset of arteriole constriction. Depressing increases in astrocyte [Ca2+]i with BAPTA inhibited the vascular constrictions in noradrenaline. We find that constrictions induced in the cerebrovasculature by increased [Ca2+]i in astrocyte endfeet are generated through the phospholipase A2–arachidonic acid pathway and 20-hydroxyeicosatetraenoic acid production. Vasoconstriction by astrocytes is a previously unknown mechanism for the regulation of CBF.", "title": "" }, { "docid": "163c0be28804445bd99ad3e4a4e2c6dd", "text": "We are witnessing a confluence between applied cryptography and secure hardware systems in enabling secure cloud computing. On one hand, work in applied cryptography has enabled efficient, oblivious data-structures and memory primitives. On the other, secure hardware and the emergence of Intel SGX has enabled a low-overhead and mass market mechanism for isolated execution. By themselves these technologies have their disadvantages. Oblivious memory primitives carry high performance overheads, especially when run non-interactively. Intel SGX, while more efficient, suffers from numerous softwarebased side-channel attacks, high context switching costs, and bounded memory size. 
In this work we build a new library of oblivious memory primitives, which we call ZeroTrace. ZeroTrace is designed to carefully combine state-of-the-art oblivious RAM techniques and SGX, while mitigating individual disadvantages of these technologies. To the best of our knowledge, ZeroTrace represents the first oblivious memory primitives running on a real secure hardware platform. ZeroTrace simultaneously enables a dramatic speed-up over pure cryptography and protection from softwarebased side-channel attacks. The core of our design is an efficient and flexible block-level memory controller that provides oblivious execution against any active software adversary, and across asynchronous SGX enclave terminations. Performance-wise, the memory controller can service requests for 4 B blocks in 1.2 ms and 1 KB blocks in 3.4 ms (given a 10 GB dataset). On top of our memory controller, we evaluate Set/Dictionary/List interfaces which can all perform basic operations (e.g., get/put/insert).", "title": "" }, { "docid": "a8d02f362ba8210488e4dea1a1bf9b6f", "text": "BACKGROUND\nThe AMNOG regulation, introduced in 2011 in Germany, changed the game for new drugs. Now, the industry is required to submit a dossier to the GBA (the central decision body in the German sickness fund system) to show additional benefit. After granting the magnitude of the additional benefit by the GBA, the manufacturer is entitled to negotiate the reimbursement price with the GKV-SV (National Association of Statutory Health Insurance Funds). The reimbursement price is defined as a discount on the drug price at launch. As the price or discount negotiations between the manufacturers and the GKV-SV takes place behind closed doors, the factors influencing the results of the negotiation are not known.\n\n\nOBJECTIVES\nThe aim of this evaluation is to identify factors influencing the results of the AMNOG price negotiation process.\n\n\nMETHODS\nThe analysis was based on a dataset containing detailed information on all assessments until the end of 2015. A descriptive analysis was followed by an econometric analysis of various potential factors (benefit rating, size of target population, deviating from appropriate comparative therapy and incorporation of HRQoL-data).\n\n\nRESULTS\nUntil December 2015, manufacturers and the GKV-SV finalized 96 negotiations in 193 therapeutic areas, based on assessment conducted by the GBA. The GBA has granted an additional benefit to 100/193 drug innovations. Negotiated discount was significantly higher for those drugs without additional benefit (p = 0.030) and non-orphan drugs (p = 0.015). Smaller population size, no deviation from recommended appropriate comparative therapy and the incorporation of HRQoL-data were associated with a lower discount on the price at launch. However, neither a uni- nor the multivariate linear regression showed enough power to predict the final discount.\n\n\nCONCLUSIONS\nAlthough the AMNOG regulation implemented binding and strict rules for the benefit assessment itself, the outcome of the discount negotiations are still unpredictable. Obviously, negotiation tactics, the current political situation and soft factors seem to play a more influential role for the outcome of the negotiations than the five hard and known factors analyzed in this study. Further research is needed to evaluate additional factors.", "title": "" }, { "docid": "e9746cc48624d7ce494af43e3ff56cb3", "text": "Driving while being tired or distracted is dangerous. 
We are developing the CafeSafe app for Android phones, which fuses information from both front and back cameras and others embedded sensors on the phone to detect and alert drivers to dangerous driving conditions in and outside of the car. CarSafe uses computer vision and machine learning algorithms on the phone to monitor and detect whether the driver is tired or distracted using the front camera while at the same time tracking road conditions using the back camera. CarSafe is the first dual-camera application for smart-phones.", "title": "" }, { "docid": "8b5bf8cf3832ac9355ed5bef7922fb5c", "text": "Determining one's own position by means of a smartphone is an important issue for various applications in the fields of personal navigation or location-based services. Places like large airports, shopping malls or extensive underground parking lots require personal navigation but satellite signals and GPS connection cannot be obtained. Thus, alternative or complementary systems are needed. In this paper a system concept to integrate a foot-mounted inertial measurement unit (IMU) with an Android smartphone is presented. We developed a prototype to demonstrate and evaluate the implementation of pedestrian strapdown navigation on a smartphone. In addition to many other approaches we also fuse height measurements from a barometric sensor in order to stabilize height estimation over time. A very low-cost single-chip IMU is used to demonstrate applicability of the outlined system concept for potential commercial applications. In an experimental study we compare the achievable accuracy with a commercially available IMU. The evaluation shows very competitive results on the order of a few percent of traveled distance. Comparing performance, cost and size of the presented IMU the outlined approach carries an enormous potential in the field of indoor pedestrian navigation.", "title": "" }, { "docid": "95453b3273460d655828d0e22bf048b0", "text": "Tumor segmentation from magnetic resonance (MR) images may aid in tumor treatment by tracking the progress of tumor growth and/or shrinkage. In this paper we present the first automatic segmentation method which separates non-enhancing brain tumors from healthy tissues in MR images to aid in the task of tracking tumor size over time. The MR feature images used for the segmentation consist of three weighted images (T1, T2 and proton density (PD)) for each axial slice through the head. An initial segmentation is computed using an unsupervised fuzzy clustering algorithm. Then, integrated domain knowledge and image processing techniques contribute to the final tumor segmentation. They are applied under the control of a knowledge-based system. The system knowledge was acquired by training on two patient volumes (14 images). Testing has shown successful tumor segmentations on four patient volumes (31 images). Our results show that we detected all six non-enhancing brain tumors, located tumor tissue in 35 of the 36 ground truth (radiologist labeled) slices containing tumor and successfully separated tumor regions from physically connected CSF regions in all the nine slices. 
Quantitative measurements are promising as correspondence ratios between ground truth and segmented tumor regions ranged between 0.368 and 0.871 per volume, with percent match ranging between 0.530 and 0.909 per volume.", "title": "" }, { "docid": "e7bfafee5cfaaa1a6a41ae61bdee753d", "text": "Borderline personality disorder (BPD) has been shown to be a valid and reliable diagnosis in adolescents and associated with a decrease in both general and social functioning. With evidence linking BPD in adolescents to poor prognosis, it is important to develop a better understanding of factors and mechanisms contributing to the development of BPD. This could potentially enhance our knowledge and facilitate the design of novel treatment programs and interventions for this group. In this paper, we outline a theoretical model of BPD in adolescents linking the original mentalization-based theory of BPD, with recent extensions of the theory that focuses on hypermentalizing and epistemic trust. We then provide clinical case vignettes to illustrate this extended theoretical model of BPD. Furthermore, we suggest a treatment approach to BPD in adolescents that focuses on the reduction of hypermentalizing and epistemic mistrust. We conclude with an integration of theory and practice in the final section of the paper and make recommendations for future work in this area. (PsycINFO Database Record", "title": "" }, { "docid": "3379acb763f587851e2218fca8084117", "text": "Qualitative research includes a variety of methodological approacheswith different disciplinary origins and tools. This article discusses three commonly used approaches: grounded theory, mixed methods, and action research. It provides background for those who will encounter these methodologies in their reading rather than instructions for carrying out such research. We describe the appropriate uses, key characteristics, and features of rigour of each approach.", "title": "" }, { "docid": "91b386ef617f75dd480e44708eb5a521", "text": "The recent rise of interest in Virtual Reality (VR) came with the availability of commodity commercial VR products, such as the Head Mounted Displays (HMD) created by Oculus and other vendors. To accelerate the user adoption of VR headsets, content providers should focus on producing high quality immersive content for these devices. Similarly, multimedia streaming service providers should enable the means to stream 360 VR content on their platforms. In this study, we try to cover different aspects related to VR content representation, streaming, and quality assessment that will help establishing the basic knowledge of how to build a VR streaming system.", "title": "" }, { "docid": "efb124a26b0cdc9b022975dd83ec76c8", "text": "Apache Spark is an open-source cluster computing framework for big data processing. It has emerged as the next generation big data processing engine, overtaking Hadoop MapReduce which helped ignite the big data revolution. Spark maintains MapReduce's linear scalability and fault tolerance, but extends it in a few important ways: it is much faster (100 times faster for certain applications), much easier to program in due to its rich APIs in Python, Java, Scala (and shortly R), and its core data abstraction, the distributed data frame, and it goes far beyond batch applications to support a variety of compute-intensive tasks, including interactive queries, streaming, machine learning, and graph processing. 
This tutorial will provide an accessible introduction to Spark and its potential to revolutionize academic and commercial data science practices.", "title": "" } ]
scidocsrr
35fc60ea26103b80f6ad1f2f1f360b4e
2D-3D Pose Estimation of Heterogeneous Objects Using a Region Based Approach
[ { "docid": "e19b6cd095129b42be0bf0fe3f3d4a96", "text": "This work addresses the problem of estimating the 6D Pose of specific objects from a single RGB-D image. We present a flexible approach that can deal with generic objects, both textured and texture-less. The key new concept is a learned, intermediate representation in form of a dense 3D object coordinate labelling paired with a dense class labelling. We are able to show that for a common dataset with texture-less objects, where template-based techniques are suitable and state-of-the art, our approach is slightly superior in terms of accuracy. We also demonstrate the benefits of our approach, compared to template-based techniques, in terms of robustness with respect to varying lighting conditions. Towards this end, we contribute a new ground truth dataset with 10k images of 20 objects captured each under three different lighting conditions. We demonstrate that our approach scales well with the number of objects and has capabilities to run fast.", "title": "" } ]
[ { "docid": "474e7ed8e2629a6d73718de7667a68f0", "text": "The Robot Operating System (ROS) is a set of software libraries and tools used to build robotic systems. ROS is known for a distributed and modular design. Given a model of the environment, task planning is concerned with the assembly of actions into a structure that is predicted to achieve goals. This can be done in a way that minimises costs, such as time or energy. Task planning is vital in directing the actions of a robotic agent in domains where a causal chain could lock the agent into a dead-end state. Moreover, planning can be used in less constrained domains to provide more intelligent behaviour. This paper describes the ROSPLAN framework, an architecture for embedding task planning into ROS systems. We provide a description of the architecture and a case study in autonomous robotics. Our case study involves autonomous underwater vehicles in scenarios that demonstrate the flexibility and robustness of our approach.", "title": "" }, { "docid": "83e70e185e5938099ffe44ede0a11837", "text": "The 100th anniversary of Edward John Mostyn Bowlby's birth (February 26th, 1907) was celebrated at the Tavistock Clinic in London by his family and colleagues, with presentations of ongoing research as well as reflections on both the person and his theory. My own reflections include the influence of ethological thinking on the development of attachment theory, Bowlby's focus on observations followed by explanation, his appreciation of emotional communication as well as behavior, and his recognition of the role of the family as well as the child/caregiver dyad. While always remaining open to new avenues of research, John Bowlby was firmly insistent on the precise use of attachment terminology, and quite rightly too!", "title": "" }, { "docid": "08606c417ec49d44c4d2715ae96c0c43", "text": "Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it brings on the average user, these gatekeepers recently started to introduce personalization features, algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the algorithms, but they also can manually influence the filtering process even when the algorithm is operational. We further analyze filtering processes in detail, show how personalization connects to other filtering techniques, and show that both human and technical biases are present in today’s emergent gatekeepers. We use the existing literature on gatekeeping and search engine bias and provide a model of algorithmic gatekeeping.", "title": "" }, { "docid": "54accd4b4611426d020a047e601b3f37", "text": "Gesture recognition enables human to communicate with machine and interact naturally without any mechanical devices. The ultimate aim of gesture recognition system is to create a system which understands human gesture and use them to control various other devices. This research focuses on gesture recognition system with a radial basis function network. The radial basis function network is a 3 layer network and trained with a radial basis function algorithm to identify the classes. The complete system is implemented on a Field Programmable Gate Array with image processing unit. 
The system is design to identify 24 American sign-language hand signs and also real time hand gesture signs. This combination leads to maximum recognition rate. The proposed system is very small due to FPGA implementation which is highly suitable for control of equipments at home, by the handicapped people.", "title": "" }, { "docid": "31c073e6836fbc7b2525af2ec6b623d3", "text": "Background\nNeonatal mortality has persisted high in Ethiopia in spite of many efforts being applied to decrease this adverse trend. Early detection of neonatal illness is an important step towards improving newborn survival. Toward this end, there is a need for the mothers to be able to identify signs in neonates that signify severe illnesses. The aim of this study was to assess knowledge about neonatal danger signs and its associated factors among postnatal mothers attending at Woldia general hospital, Ethiopian.\n\n\nMethods\nInstitutional based cross-sectional study design was conducted from January-May, 2017. The hospital that provides antenatal care (ANC), delivery, and postnatal services was purposively sampled. Structured interviewer managed questionnaire was administered to postnatal mothers attending Woldia general hospital. Frequencies, bivariate and multivariate logistic regression were determined using the SPSS software (Version 20).\n\n\nResults\nDuring the study period 197 mothers attending postnatal care (PNC) service at Woldia general hospital were interviewed. Information on different neonatal danger signs was not provided to 92(46.7%) postnatal mothers during their antenatal clinic attendance by the healthcare providers. The majority of mothers, 174(88.3%) identified less than six neonatal danger signs. The hotness of the body of neonates was the commonly recognized danger sign by 106(53.8%) postnatal mothers. Of the total mothers, 67(34%), 60(30.5%), 56(28.4%), 44(22.3%) recognized unable to breastfeeding, convulsion, lethargy, difficulty in breathing as newly born danger signs, respectively. Out of 197 mothers, 32(16.2%) were giving birth at home. Mother's age(AOR = 1.33, 95% CI: 1.99-3.08), marital status(AOR = 2.50, 95% CI: 0.29-4.31), mother's education status(AOR = 3.48, 95% CI:1.57-8.72), husband's education(AOR = 4.92, 95% CI: 1.29-12.81), attending ANC (AOR = 2.88, 95% CI: 1.15, 4.85), mother's residence(AOR = 0.78, 95% CI: 0.47-1.65), information about neonatal danger signs(AOR = 3.48, 95% CI 1.40-9.49) had positive association with maternal level of knowledge to identify different neonatal danger signs.\n\n\nConclusion\nMaternal knowledge level about neonatal danger signs was very low. Therefore, intervention modalities that focus on increasing level of parental education, access to ANC and PNC service are needed.", "title": "" }, { "docid": "97281ba9e6da8460f003bb860836bb10", "text": "In this letter, a novel miniaturized periodic element for constructing a bandpass frequency selective surface (FSS) is proposed. Compared to previous miniaturized structures, the FSS proposed has better miniaturization performance with the dimension of a unit cell only 0.061 λ × 0.061 λ , where λ represents the wavelength of the resonant frequency. Moreover, the miniaturization characteristic is stable with respect to different polarizations and incident angles of the waves illuminating. 
Both simulation and measurement are taken, and the results obtained demonstrate the claimed performance.", "title": "" }, { "docid": "b9daa134744b8db757fc0857f479bd70", "text": "Influence is a complex and subtle force that governs the dynamics of social networks as well as the behaviors of involved users. Understanding influence can benefit various applications such as viral marketing, recommendation, and information retrieval. However, most existing works on social influence analysis have focused on verifying the existence of social influence. Few works systematically investigate how to mine the strength of direct and indirect influence between nodes in heterogeneous networks.\n To address the problem, we propose a generative graphical model which utilizes the heterogeneous link information and the textual content associated with each node in the network to mine topic-level direct influence. Based on the learned direct influence, a topic-level influence propagation and aggregation algorithm is proposed to derive the indirect influence between nodes. We further study how the discovered topic-level influence can help the prediction of user behaviors. We validate the approach on three different genres of data sets: Twitter, Digg, and citation networks. Qualitatively, our approach can discover interesting influence patterns in heterogeneous networks. Quantitatively, the learned topic-level influence can greatly improve the accuracy of user behavior prediction.", "title": "" }, { "docid": "c6aaacf5207f561f70b7ec6c738bb5f0", "text": "Skeletal bone age assessment is a common clinical practice to diagnose endocrine and metabolic disorders in child development. In this paper, we describe a fully automated deep learning approach to the problem of bone age assessment using data from the 2017 Pediatric Bone Age Challenge organized by the Radiological Society of North America. The dataset for this competition consists of 12,600 radiological images. Each radiograph in this dataset is an image of a left hand labeled with bone age and sex of a patient. Our approach utilizes several deep neural network architectures trained end-to-end. We use images of whole hands as well as specific parts of a hand for both training and prediction. This approach allows us to measure the importance of specific hand bones for automated bone age analysis. We further evaluate the performance of the suggested method in the context of skeletal development stages. Our approach outperforms other common methods for bone age assessment.", "title": "" }, { "docid": "8533b47323e9de6fb24e88a49c3e52fa", "text": "An ontology is a set of deenitions of content-speciic knowledge representation prim-itives: classes, relations, functions, and object constants. Ontolingua is mechanism for writing ontologies in a canonical format, such that they can be easily translated into a variety of representation and reasoning systems. This allows one to maintain the ontol-ogy in a single, machine-readable form while using it in systems with diierent syntax and reasoning capabilities. The syntax and semantics are based on the KIF knowledge interchange format 11]. Ontolingua extends KIF with standard primitives for deening classes and relations, and organizing knowledge in object-centered hierarchies with inheritance. The Ontolingua software provides an architecture for translating from KIF-level sentences into forms that can be eeciently stored and reasoned about by target representation systems. 
Currently, there are translators into LOOM, Epikit, and Algernon, as well as a canonical form of KIF. This paper describes the basic approach of Ontolingua to the ontology sharing problem, introduces the syntax, and describes the semantics of a few ontological commitments made in the software. Those commitments, which are reeected in the on-tolingua syntax and the primitive vocabulary of the frame ontology, include: a distinction between deenitional and nondeenitional assertions; the organization of knowledge with classes, instances, sets, and second-order relations; and assertions whose meaning depends on the contents of the knowledge base. Limitations of Ontolingua's \\conser-vative\" approach to sharing ontologies and alternative approaches to the problem are discussed.", "title": "" }, { "docid": "88ea1a18f6b12fca07a804baab390a4a", "text": "One of the brain's key roles is to facilitate foraging and feeding. It is presumably no coincidence, then, that the mouth is situated close to the brain in most animal species. However, the environments in which our brains evolved were far less plentiful in terms of the availability of food resources (i.e., nutriments) than is the case for those of us living in the Western world today. The growing obesity crisis is but one of the signs that humankind is not doing such a great job in terms of optimizing the contemporary food landscape. While the blame here is often put at the doors of the global food companies - offering addictive foods, designed to hit 'the bliss point' in terms of the pleasurable ingredients (sugar, salt, fat, etc.), and the ease of access to calorie-rich foods - we wonder whether there aren't other implicit cues in our environments that might be triggering hunger more often than is perhaps good for us. Here, we take a closer look at the potential role of vision; Specifically, we question the impact that our increasing exposure to images of desirable foods (what is often labelled 'food porn', or 'gastroporn') via digital interfaces might be having, and ask whether it might not inadvertently be exacerbating our desire for food (what we call 'visual hunger'). We review the growing body of cognitive neuroscience research demonstrating the profound effect that viewing such images can have on neural activity, physiological and psychological responses, and visual attention, especially in the 'hungry' brain.", "title": "" }, { "docid": "dbbd98ed1a7ee32ab9626a923925c45d", "text": "In this paper, we present the gated selfmatching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer networks to locate the positions of answers from the passages. We conduct extensive experiments on the SQuAD dataset. The single model achieves 71.3% on the evaluation metrics of exact match on the hidden test set, while the ensemble model further boosts the results to 75.9%. 
At the time of submission of the paper, our model holds the first place on the SQuAD leaderboard for both single and ensemble model.", "title": "" }, { "docid": "b552ec54acdbd6e88bc9c8e3e5363299", "text": "Resumo: A gestão do Conhecimento (GC) é um tema que vem despertando o interesse de muitos pesquisadores nas últimas décadas, sendo grande parte das contribuições orientadas por etapas, denominadas processo de GC. Por se tratar de um tema abrangente, as publicações sobre o processo de GC apresentam contribuições multidisciplinares e, desta forma, esta pesquisa tem por objetivo conceituar este processo, analisando as principais abordagens que orientam o estudo de cada etapa, e, também, levantar as principais publicações que tratam do tema, classificando‐as quanto à sua área de contribuição. Para alcançar estes objetivos, este artigo é orientado por uma pesquisa teórico‐conceitual, na qual foram estudados 71 artigos. Os resultados desta pesquisa apontam que o processo de GC é constituído de quatro etapas: aquisição, armazenamento, distribuição e utilização do conhecimento. Na fase de aquisição, as temáticas estudadas são aprendizagem organizacional, absorção de conhecimento, processo criativo e transformação do conhecimento. Na fase de armazenamento, as contribuições tratam do indivíduo, organização e tecnologia da informação, enquanto na fase de distribuição os estudos concentram‐se nas temáticas contato social, comunidade de prática e compartilhamento via tecnologia de informação. E, por fim, na fase de utilização, são abordados os temas forma de utilização, capacidade dinâmica e recuperação e transformação do conhecimento. Palavras-chave: Processo de gestão do conhecimento; Aquisição de conhecimento; Armazenamento de conhecimento; Distribuição de conhecimento; Utilização de conhecimento; Pesquisa teórico‐conceitual. Abstract: Knowledge Management (KM) is a subject that has aroused the interest of many researchers in the last decade, being great part of contributions driven by steps, named KM process. Because it is an embracing theme, publications about KM process have multidisciplinary contributions and, thus, this research aims to conceptualize this process, analyzing the main approach that guides the study of each stage, and also, to raise the main publications on the subject, classifying them as to their contribution area. To reach these goals, this article is oriented by a theoretical-conceptual research, in which 71 articles were studied. The results indicate that the KM process consists of four stages: acquisition, storage, distribution, and use of knowledge. In the acquisition phase, the studied themes are organizational learning, knowledge inception, creative process and knowledge transformation. In the storage phase, the contributions deal with a person, an organization and information technology, while in the distribution phase the studies concentrate in social contact themes, practice community and sharing via information technology. And, finally, in the use phase, we address the form of use, dynamic capacity and retrieval and knowledge transformation.", "title": "" }, { "docid": "cc4c0a749c6a3f4ac92b9709f24f03f4", "text": "Modern GPUs with their several hundred cores and more accessible programming models are becoming attractive devices for compute-intensive applications. They are particularly well suited for applications, such as image processing, where the end result is intended to be displayed via the graphics card. 
One of the more versatile and powerful graphics techniques is ray tracing. However, tracing each ray of light in a scene is very computational expensive and have traditionally been preprocessed on CPUs over hours, if not days. In this paper, Nvidia’s new OptiX ray tracing engine is used to show how the power of modern graphics cards, such as the Nvidia Quadro FX 5800, can be harnessed to ray trace several scenes that represent real-life applications in real-time speeds ranging from 20.63 to 67.15 fps. Near-perfect speedup is demonstrated on dual GPUs for scenes with complex geometries. The impact on ray tracing of the recently announced Nvidia Fermi processor, is also discussed.", "title": "" }, { "docid": "bbc565d8cc780a1d68bf5384283f59db", "text": "The physiological requirements of performing exercise above the anaerobic threshold are considerably more demanding than for lower work rates. Lactic acidosis develops at a metabolic rate that is specific to the individual and the task being performed. Although numerous pyruvate-dependent mechanisms can lead to an elevated blood lactate, the increase in lactate during muscular exercise is accompanied by an increase in lactate/pyruvate ratio (i.e., increased NADH/NAD ratio). This is typically caused by an inadequate O2 supply to the mitochondria. Thus, the anaerobic threshold can be considered to be an important assessment of the ability of the cardiovascular system to supply O2 at a rate adequate to prevent muscle anaerobiosis during exercise testing. In this paper, we demonstrate, with statistical justification, that the pattern of arterial lactate and lactate/pyruvate ratio increase during exercise evidences threshold dynamics rather than the continuous exponential increase proposed by some investigators. The pattern of change in arterial bicarbonate (HCO3-) and pulmonary gas exchange supports this threshold concept. To estimate the anaerobic threshold by gas exchange methods, we measure CO2 output (VCO2) as a continuous function of O2 uptake (VO2) (V-slope analysis) as work rate is increased. The break-point in this plot reflects the obligate buffering of increasing lactic acid production by HCO3-. The anaerobic threshold measured by the V-slope analysis appears to be a sensitive index of the development of metabolic acidosis even in subjects in whom other gas exchange indexes are insensitive, owing to irregular breathing, reduced chemoreceptor sensitivity, impaired respiratory mechanics, or all of these occurrences.", "title": "" }, { "docid": "e83227e0485cf7f3ba19ce20931bbc2f", "text": "There has been an increased global demand for dermal filler injections in recent years. Although hyaluronic acid-based dermal fillers generally have a good safety profile, serious vascular complications have been reported. Here we present a typical case of skin necrosis following a nonsurgical rhinoplasty using hyaluronic acid filler. Despite various rescuing managements, unsightly superficial scars were left. It is critical for plastic surgeons and dermatologists to be familiar with the vascular anatomy and the staging of vascular complications. Any patients suspected to experience a vascular complication should receive early management under close monitoring. 
Meanwhile, the potentially devastating outcome caused by illegal practice calls for stricter regulations and law enforcement.", "title": "" }, { "docid": "90f4f03173418ef725210f7bcca1b973", "text": "This paper describes a visualization tool DESvisual that helps students understand and instructors teach the building blocks of symmetric encryption. In particular, the tool depicts the primitive operations required to perform the initial permutation and one Feistel round of DES using either an eight or 16 bit input. A student can trace through an encryption performed by the tool, or can be guided through an encryption or decryption, computing the output of each operation herself. This helps students to understand the primitive operations, how these operations are composed into the DES algorithm, and how functions and their composition are depicted and documented. Furthermore, the opportunity for self-study provides an instructor greater flexibility in selecting a lecture pace over this detail-filled material. Security concerns impact an ever increasing number of the applications that computer scientists design and develop. Many security problems are solved through use of cryptography. Unfortunately, computer science students commonly have difficulty with the sophisticated mathematics used in cryptographic algorithms. This problem is exacerbated by the fact ∗Communicating Author", "title": "" }, { "docid": "51c5dbc32d37777614936a77a10e42bc", "text": "During the last decade, the applications of signal processing have drastically improved with deep learning. However areas of affecting computing such as emotional speech synthesis or emotion recognition from spoken language remains challenging. In this paper, we investigate the use of a neural Automatic Speech Recognition (ASR) as a feature extractor for emotion recognition. We show that these features outperform the eGeMAPS feature set to predict the valence and arousal emotional dimensions, which means that the audio-to-text mapping learned by the ASR system contains information related to the emotional dimensions in spontaneous speech. We also examine the relationship between first layers (closer to speech) and last layers (closer to text) of the ASR and valence/arousal.", "title": "" }, { "docid": "c4776e4eafd89a98cd899750b2f8ce32", "text": "The explosive growth of the World Wide Web leads to the fast advancing development of e-commerce techniques. Recommender systems, which use personalised information filtering techniques to generate a set of items suitable to a given user, have received considerable attention. Userand item-based algorithms are two popular techniques for the design of recommender systems. These two algorithms are known to have Cold-Start problems, i.e., they are unable to effectively handle Cold-Start users who have an extremely limited number of purchase records. In this paper, we develop TrustRank, a novel recommender system which handles the Cold-Start problem by leveraging the user-trust networks which are commonly available for e-commerce applications. A user-trust network is formed by friendships or trust relationships that users specify among them. While it is straightforward to conjecture that a user-trust network is helpful for improving the accuracy of recommendations, a key challenge for using user-trust network to facilitate Cold-Start users is that these users also tend to have a very limited number of trust relationships. 
To address this challenge, we propose a pre-processing propagation of the Cold-Start users’ trust network. In particular, by applying the personalised PageRank algorithm, we expand the friends of a given user to include others with similar purchase records to his/her original friends. To make this propagation algorithm scalable to a large amount of users, as required by real-world recommender systems, we devise an iterative computation algorithm of the original personalised TrustRank which can incrementally compute trust vectors for Cold-Start users. We conduct extensive experiments to demonstrate the consistently improvement provided by our proposed algorithm over the existing recommender algorithms on the accuracy of recommendations for Cold-Start users.", "title": "" }, { "docid": "d15ce9f62f88a07db6fa427fae61f26c", "text": "This paper introduced a detail ElGamal digital signature scheme, and mainly analyzed the existing problems of the ElGamal digital signature scheme. Then improved the scheme according to the existing problems of ElGamal digital signature scheme, and proposed an implicit ElGamal type digital signature scheme with the function of message recovery. As for the problem that message recovery not being allowed by ElGamal signature scheme, this article approached a method to recover message. This method will make ElGamal signature scheme have the function of message recovery. On this basis, against that part of signature was used on most attacks for ElGamal signature scheme, a new implicit signature scheme with the function of message recovery was formed, after having tried to hid part of signature message and refining forthcoming implicit type signature scheme. The safety of the refined scheme was anlyzed, and its results indicated that the new scheme was better than the old one.", "title": "" }, { "docid": "86458ef92b27f6d6e4e723496f30a897", "text": "OBJECTIVE\nThe end stage of chronic obstructive pulmonary disease (COPD) is described as prolonged, and the symptom burden for patients with COPD is often high. It progresses slowly over several years and can be punctuated by abrupt exacerbations that sometimes end in sudden death or a recovery of longer or shorter duration. This makes it difficult to identify the critical junctures in order to prognosticate the progress and time of death. Patients with COPD often express a fear that the dying process is going to be difficult. There is a fear that the dyspnea will worsen and lead to death by suffocation. The present article aimed to retrospectively describe the final year of life for patients with advanced COPD with a focus on death and dying from the perspective of relatives.\n\n\nMETHOD\nInterviews were conducted with the relatives of deceased family members who had advanced COPD. In total, 13 interviews were conducted and analyzed by means of content analysis.\n\n\nRESULT\nAll relatives described the patients as having had a peaceful death that did not correspond with the worry expressed earlier by both the patients and themselves. During the final week of life, two different patterns in the progress of the illness trajectory emerged: a temporary improvement where death was unexpected and a continued deterioration where death was inevitable.\n\n\nSIGNIFICANCE OF RESULTS\nThe patients and their relatives lived with uncertainty up until the time of death. Little support for psychosocial and existential needs was available. 
It is essential for the nurse to create relationships with patients and relatives that enable them to talk about dying and death on their own terms.", "title": "" } ]
scidocsrr
6fcfa3114bc9f0e808369c8678040d22
Situational maturity models as instrumental artifacts for organizational design
[ { "docid": "3105a48f0b8e45857e8d48e26b258e04", "text": "Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models have been discussed quite comprehensively, the design of methods is addressed rarely. But methods appear to be of utmost importance particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.", "title": "" } ]
[ { "docid": "24fbee8f87355c5b0b1d60ee43b31b02", "text": "Video gaming has become a popular leisure activity in many parts of the world, and an increasing number of empirical studies examine the small minority that appears to develop problems as a result of excessive gaming. This study investigated prevalence rates and predictors of video game addiction in a sample of gamers, randomly selected from the National Population Registry of Norway (N = 3389). Results showed there were 1.4 % addicted gamers, 7.3 % problem gamers, 3.9 % engaged gamers, and 87.4 % normal gamers. Gender (being male) and age group (being young) were positively associated with addicted-, problem-, and engaged gamers. Place of birth (Africa, Asia, South- and Middle America) were positively associated with addicted- and problem gamers. Video game addiction was negatively associated with conscientiousness and positively associated with neuroticism. Poor psychosomatic health was positively associated with problem- and engaged gaming. These factors provide insight into the field of video game addiction, and may help to provide guidance as to how individuals that are at risk of becoming addicted gamers can be identified.", "title": "" }, { "docid": "3de4922096e2d9bf04ba1ea89b3b3ff1", "text": "Events of various sorts make up an important subset of the entities relevant not only in knowledge representation but also in natural language processing and numerous other fields and tasks. How to represent these in a homogeneous yet expressive, extensive, and extensible way remains a challenge. In this paper, we propose an approach based on FrameBase, a broad RDFS-based schema consisting of frames and roles. The concept of a frame, which is a very general one, can be considered as subsuming existing definitions of events. This ensures a broad coverage and a uniform representation of various kinds of events, thus bearing the potential to serve as a unified event model. We show how FrameBase can represent events from several different sources and domains. These include events from a specific taxonomy related to organized crime, events captured using schema.org, and events from DBpedia.", "title": "" }, { "docid": "7f988f0bed497857eac00dd8781a2158", "text": "BACKGROUND/PURPOSE\nHigh-intensity focused ultrasound (HIFU) has been used for skin tightening. However, there is a rising concern of irreversible adverse effects. Our aim was to evaluate the depth of thermal injury zone after HIFU energy passes through different condition.\n\n\nMATERIALS AND METHODS\nTo analyze the consistency of the HIFU device, phantom tests were performed. Simulations were performed on ex vivo porcine tissues to estimate the area of the thermal coagulation point (TCP) according to the applied energy and skin condition. The experiment was designed in three orientations: normal direction (from epidermis to fascia), reverse direction (from fascia to epidermis), and normal direction without epidermis.\n\n\nRESULTS\nThe TCP was larger and wider depending on the applied fluence and handpieces (HPs). When we measured TCP in different directions, the measured area in the normal direction was more superficially located than that in the reverse direction. The depth of the TCP in the porcine skin without epidermis was detected at 130% deeper than in skin with an intact epidermis.\n\n\nCONCLUSION\nThe affected area by HIFU is dependent on the skin condition and the characteristics of the HP and applied fluence. 
Considerations of these factors may be the key to minimize the unwanted adverse effects.", "title": "" }, { "docid": "720778ca4d6d8eb0fa78eecb1ebbb527", "text": "Address spoofing attacks like ARP spoofing and DDoS attacks are mostly launched in a networking environment to degrade the performance. These attacks sometimes break down the network services before the administrator comes to know about the attack condition. Software Defined Networking (SDN) has emerged as a novel network architecture in which date plane is isolated from the control plane. Control plane is implemented at a central device called controller. But, SDN paradigm is not commonly used due to some constraints like budget, limited skills to control SDN, the flexibility of traditional protocols. To get SDN benefits in a traditional network, a limited number of SDN devices can be deployed among legacy devices. This technique is called hybrid SDN. In this paper, we propose a new approach to automatically detect the attack condition and mitigate that attack in hybrid SDN. We represent the network topology in the form of a graph. A graph based traversal mechanism is adopted to indicate the location of the attacker. Simulation results show that our approach enhances the network efficiency and improves the network security Keywords—Communication system security; Network Security; ARP Spoofing Introduction", "title": "" }, { "docid": "49239993ee1c281e8384f0ce01f03fd6", "text": "With the advent of social media, our online feeds increasingly consist of short, informal, and unstructured text. This textual data can be analyzed for the purpose of improving user recommendations and detecting trends. Instagram is one of the largest social media platforms, containing both text and images. However, most of the prior research on text processing in social media is focused on analyzing Twitter data, and little attention has been paid to text mining of Instagram data. Moreover, many text mining methods rely on annotated training data, which in practice is both difficult and expensive to obtain. In this paper, we present methods for unsupervised mining of fashion attributes from Instagram text, which can enable a new kind of user recommendation in the fashion domain. In this context, we analyze a corpora of Instagram posts from the fashion domain, introduce a system for extracting fashion attributes from Instagram, and train a deep clothing classifier with weak supervision to classify Instagram posts based on the associated text. With our experiments, we confirm that word embeddings are a useful asset for information extraction. Experimental results show that information extraction using word embeddings outperforms a baseline that uses Levenshtein distance. The results also show the benefit of combining weak supervision signals using generative models instead of majority voting. Using weak supervision and generative modeling, an F1 score of 0.61 is achieved on the task of classifying the image contents of Instagram posts based solely on the associated text, which is on level with human performance. 
Finally, our empirical study provides one of the few available studies on Instagram text and shows that the text is noisy, that the text distribution exhibits the long-tail phenomenon, and that comment sections on Instagram are multi-lingual.", "title": "" }, { "docid": "1e40fbed88643aa696d74460dc489358", "text": "We introduce a statistical model for microarray gene expression data that comprises data calibration, the quantification of differential expression, and the quantification of measurement error. In particular, we derive a transformation h for intensity measurements, and a difference statistic Deltah whose variance is approximately constant along the whole intensity range. This forms a basis for statistical inference from microarray data, and provides a rational data pre-processing strategy for multivariate analyses. For the transformation h, the parametric form h(x)=arsinh(a+bx) is derived from a model of the variance-versus-mean dependence for microarray intensity data, using the method of variance stabilizing transformations. For large intensities, h coincides with the logarithmic transformation, and Deltah with the log-ratio. The parameters of h together with those of the calibration between experiments are estimated with a robust variant of maximum-likelihood estimation. We demonstrate our approach on data sets from different experimental platforms, including two-colour cDNA arrays and a series of Affymetrix oligonucleotide arrays.", "title": "" }, { "docid": "85ebf12dd3514f3586ea599450f0a1e6", "text": "PROBLEM\nThe healthcare system is plagued with increasing cost and poor quality outcomes. A major contributing factor for these issues is that outdated leadership practices, such as leader-centricity, linear thinking, and poor readiness for innovation, are being used in healthcare organizations.\n\n\nSOLUTION\nComplexity leadership theory provides a new framework with which healthcare leaders may practice leadership. Complexity leadership theory conceptualizes leadership as a continual process that stems from collaboration, complex systems thinking, and innovation mindsets.\n\n\nCONCLUSION\nCompared to transactional and transformational leadership concepts, complexity leadership practices hold promise to improve cost and quality in health care.", "title": "" }, { "docid": "79262b2834a9f6979d2e10d3464a279d", "text": "An interleaved totem-pole boost bridgeless rectifier with reduced reverse-recovery problems for power factor correction is proposed in this paper. The proposed converter consists of two interleaved and intercoupled totem-pole boost bridgeless converter cells. The two cells operate in phase-shift mode. Thus, the input current can be continuous with low ripple. For the individual cells, they operate alternatively in discontinuous current mode and the maximum duty ratio is 50%, which allows shifting the diode current with low di/dt rate to achieve zero-current switching off. Zero-voltage switching is achieved in the MOSFETs under low line input. Furthermore, the merits of totem-pole topology are inherited. The common-mode (CM) noise interference is rather low. And the potential capacity of bidirectional power conversion is retained. In brief, the conduction losses are reduced, reverse-recovery process is improved, and high efficiency is achieved. The interleaved totem-pole cell can also be applied to bidirectional dc/dc converters and ac/dc converters. 
Finally, an 800 W, 100 kHz experimental prototype was built to verify the theoretical analysis and feasibility of the proposed converter, whose efficiency is above 95.5% at full load under a 90 V input.", "title": "" }, { "docid": "df896e48cb4b5a364006b3a8e60a96ac", "text": "This paper describes a monocular vision based parking-slot-markings recognition algorithm, which is used to automate the target position selection of an automatic parking assist system. Peak-pair detection and clustering in Hough space recognize marking lines. Specifically, a one-dimensional filter in Hough space is designed to utilize a priori knowledge about the characteristics of marking lines in the bird's-eye-view edge image. A modified distance between a point and a line-segment is used to distinguish the guideline from the recognized marking line-segments. Once the guideline is successfully recognized, T-shape template matching easily recognizes the dividing marking line-segments. Experiments show that the proposed algorithm successfully recognizes parking slots even when adjacent vehicles severely occlude the parking-slot markings.", "title": "" }, { "docid": "b1f0dbf303028211c028df13ef431f48", "text": "Dealing with uncertainty is essential for efficient reinforcement learning. There is a growing literature on uncertainty estimation for deep learning from fixed datasets, but many of the most popular approaches are poorly suited to sequential decision problems. Other methods, such as bootstrap sampling, have no mechanism for uncertainty that does not come from the observed data. We highlight why this can be a crucial shortcoming and propose a simple remedy through the addition of a randomized untrainable ‘prior’ network to each ensemble member. We prove that this approach is efficient with linear representations, provide simple illustrations of its efficacy with nonlinear representations, and show that this approach scales to large-scale problems far better than previous attempts.", "title": "" }, { "docid": "b151343a4c1e365ede70a71880065aab", "text": "Cardiovascular disease (CVD) and depression are common. Patients with CVD have more depression than the general population. Persons with depression are more likely to eventually develop CVD and also have a higher mortality rate than the general population. Patients with CVD who are also depressed have a worse outcome than those patients who are not depressed. There is a graded relationship: the more severe the depression, the higher the subsequent risk of mortality and other cardiovascular events. It is possible that depression is only a marker for more severe CVD which so far cannot be detected using our currently available investigations. However, given the increased prevalence of depression in patients with CVD, a causal relationship with either CVD causing more depression or depression causing more CVD and a worse prognosis for CVD is probable. There are many possible pathogenetic mechanisms that have been described, which are plausible and that might well be important. However, whether or not there is a causal relationship, depression is the main driver of quality of life and requires prevention, detection, and management in its own right. Depression after an acute cardiac event is commonly an adjustment disorder that can improve spontaneously with comprehensive cardiac management. 
Additional management strategies for depressed cardiac patients include cardiac rehabilitation and exercise programmes, general support, cognitive behavioural therapy, antidepressant medication, combined approaches, and probably disease management programmes.", "title": "" }, { "docid": "a93833a6ad41bdc5011a992509e77c9a", "text": "We present the implementation of a large-vocabulary continuous speech recognition (LVCSR) system on NVIDIA’s Tegra K1 hybrid GPU-CPU embedded platform. The system is trained on a standard 1000-hour corpus, LibriSpeech, features a trigram WFST-based language model, and achieves state-of-the-art recognition accuracy. The fact that the system is real-time capable and consumes less than 7.5 watts peak makes it perfectly suitable for fast, but precise, offline spoken dialog applications, such as in robotics, portable gaming devices, or in-car systems.", "title": "" }, { "docid": "166eafbbf7379c62a84fc08ff182ec27", "text": "Wrinkles are an extremely important contribution to enhancing the realism of human figure models. In this paper, we present an approach to generate static and dynamic wrinkles on human skin. For the static model, we consider micro and macro structures of the skin surface geometry. For the wrinkle dynamics, an approach using a biomechanical skin model is employed. The tile texture patterns in the micro structure of the skin surface are created using planar Delaunay triangulation. Functions of barycentric coordinates are applied to simulate the curved ridges. The visible (macro) flexure lines which may form wrinkles are predefined edges on the micro structure. These lines act as constraints for the hierarchical triangulation process. Furthermore, the dynamics of expressive wrinkles (controlling their depth and fold) is modeled according to the principal strain of the deformed skin surface. Bump texture mapping is used for skin rendering.", "title": "" }, { "docid": "6ca4d0021c11906bae4dbd5db9b47c80", "text": "Writing code to interact with external devices is inherently difficult, and the added demands of writing device drivers in C for kernel mode compound the problem. This environment is complex and brittle, leading to increased development costs and, in many cases, unreliable code. Previous solutions to this problem ignore the cost of migrating drivers to a better programming environment and require writing new drivers from scratch or even adopting a new operating system. We present Decaf Drivers, a system for incrementally converting existing Linux kernel drivers to Java programs in user mode. With support from program-analysis tools, Decaf separates out performance-sensitive code and generates a customized kernel interface that allows the remaining code to be moved to Java. With this interface, a programmer can incrementally convert driver code in C to a Java decaf driver. The Decaf Drivers system achieves performance close to native kernel drivers and requires almost no changes to the Linux kernel. Thus, Decaf Drivers enables driver programming to advance into the era of modern programming languages without requiring a complete rewrite of operating systems or drivers. 
With five drivers converted to Java, we show that Decaf Drivers can (1) move the majority of a driver’s code out of the kernel, (2) reduce the amount of driver code, (3) detect broken error handling at compile time with exceptions, (4) gracefully evolve as driver and kernel code and data structures change, and (5) perform within one percent of native kernel-only drivers.", "title": "" }, { "docid": "41fdf1b9313d4b0510e2d7ebe0a16c62", "text": "With the development of Internet technology, online job-hunting plays an increasingly important role in job-searching. It is difficult for job hunters to rely solely on keyword retrieval to find positions that meet their needs. To solve this issue, we adopted an item-based collaborative filtering algorithm for job recommendations. In this paper, we optimized the algorithm by combining position descriptions and resume information. Specifically, the job preference prediction formula is optimized with a historical delivery weight calculated from position descriptions and a similar-user weight calculated from resume information. Experiments on a real data set show that our method significantly improves job recommendation results.", "title": "" }, { "docid": "5f054e52a77235bf0edb6b19e705ba0f", "text": "BACKGROUND\nThe aim of the present study was to compare the prognostic impact of anatomic resection (AR) versus non-anatomic resection (NAR) on patient survival after resection of a single hepatocellular carcinoma (HCC).\n\n\nMETHODS\nTo control for confounding variable distributions, a 1-to-1 propensity score match was applied to compare the outcomes of AR and NAR. Among 710 patients with a primary, solitary HCC of <5.0 cm in diameter that was resectable by either AR or NAR from 2003 to 2007 in Japan and Korea, 355 patients underwent NAR and 355 underwent AR of at least one section with complete removal of the portal territory containing the tumor.\n\n\nRESULTS\nOverall survival (OS) was better in the AR than NAR group (hazard ratio 1.67, 95% confidence interval 1.28-2.19, P < 0.001) while disease-free survival showed no significant difference. Significantly fewer patients in the AR than NAR group developed intrahepatic HCC recurrence and multiple intrahepatic recurrences. Patients with poorly differentiated HCC who underwent AR had improved disease-free survival and OS.\n\n\nCONCLUSIONS\nAnatomic resection decreases the risk of tumor recurrence and improves OS in patients with a primary, solitary HCC of <5.0 cm in diameter.", "title": "" }, { "docid": "e8d0eab8c5ea4c3186499aa13cc6fc56", "text": "A new multiple-input dc-dc converter realized from a modified inverse Watkins-Johnson topology is presented and analyzed. Fundamental electrical characteristics are presented and power budget equations are derived. A small-signal analysis model of the proposed converter is presented and studied. Two possible operation methods to achieve output voltage regulation are presented here. The analysis is verified with simulations and experiments on a prototype circuit.", "title": "" }, { "docid": "38de76b401fc385fb84858161d205ea2", "text": "In mobile edge computing systems, mobile devices can offload compute-intensive tasks to a nearby \\emph{cloudlet}, so as to save energy and extend battery life. Unlike a fully-fledged cloud, a cloudlet is a small-scale datacenter deployed at a wireless access point, and thus is highly constrained by both radio and compute resources. 
We show in this paper that separately optimizing the allocation of either compute or radio resource - as most existing works did - is highly \\emph{suboptimal}: the congestion of compute resource leads to the waste of radio resource, and vice versa. To address this problem, we propose a joint scheduling algorithm that allocates both radio and compute resources coordinately. Specifically, we consider a cloudlet in an Orthogonal Frequency-Division Multiplexing Access (OFDMA) system with multiple mobile devices, where we study subcarrier allocation for task offloading and CPU time allocation for task execution in the cloudlet. Simulation results show that the proposed algorithm significantly outperforms per- resource optimization, accommodating more offloading requests while achieving salient energy saving.", "title": "" }, { "docid": "8005d1bd2065a14097cf5da85b941fc1", "text": "The American Psychological Association's (APA's) stance on the psychological maturity of adolescents has been criticized as inconsistent. In its Supreme Court amicus brief in Roper v. Simmons (2005), which abolished the juvenile death penalty, APA described adolescents as developmentally immature. In its amicus brief in Hodgson v. Minnesota (1990), however, which upheld adolescents' right to seek an abortion without parental involvement, APA argued that adolescents are as mature as adults. The authors present evidence that adolescents demonstrate adult levels of cognitive capability earlier than they evince emotional and social maturity. On the basis of this research, the authors argue that it is entirely reasonable to assert that adolescents possess the necessary skills to make an informed choice about terminating a pregnancy but are nevertheless less mature than adults in ways that mitigate criminal responsibility. The notion that a single line can be drawn between adolescence and adulthood for different purposes under the law is at odds with developmental science. Drawing age boundaries on the basis of developmental research cannot be done sensibly without a careful and nuanced consideration of the particular demands placed on the individual for \"adult-like\" maturity in different domains of functioning.", "title": "" } ]
scidocsrr
36ae2309e058fb49e92fe158994350d7
Data Mining Yelp Data-Predicting rating stars from review text
[ { "docid": "8d29cf5303d9c94741a8d41ca6c71da9", "text": "Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called joint sentiment/topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify the review sentiment polarity and minimum prior information have also been explored to further improve the sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST.", "title": "" } ]
[ { "docid": "f86e3894a6c61c3734e1aabda3500ef0", "text": "We perform sensitivity analyses on a mathematical model of malaria transmission to determine the relative importance of model parameters to disease transmission and prevalence. We compile two sets of baseline parameter values: one for areas of high transmission and one for low transmission. We compute sensitivity indices of the reproductive number (which measures initial disease transmission) and the endemic equilibrium point (which measures disease prevalence) to the parameters at the baseline values. We find that in areas of low transmission, the reproductive number and the equilibrium proportion of infectious humans are most sensitive to the mosquito biting rate. In areas of high transmission, the reproductive number is again most sensitive to the mosquito biting rate, but the equilibrium proportion of infectious humans is most sensitive to the human recovery rate. This suggests strategies that target the mosquito biting rate (such as the use of insecticide-treated bed nets and indoor residual spraying) and those that target the human recovery rate (such as the prompt diagnosis and treatment of infectious individuals) can be successful in controlling malaria.", "title": "" }, { "docid": "ce7fbc80c7725a1e6841c1c30cd8ae76", "text": "The new generation of the web has opened up opportunities for developing new kinds of information systems for influencing users. For instance, one of the most prominent areas for future healthcare improvement is the role of the web in fostering improved health and healthier lifestyles. The success or failure of web information systems seems to rely on their social features. For this reason, a better understanding of the techno-social aspects of web information systems and the way they influence people is needed. This article is conceptual and theory-creating by its nature. It introduces the concept of a behavior change support system and suggests it as a key construct for web science research. The behavior change support systems are characterized by their persuasive purpose.", "title": "" }, { "docid": "49c1754d0d36122538e0a1721d1afce6", "text": "Definition of GCA (TA) . Is a chronic vasculitis of large and medium vessels. . Leads to granulomatous inflammation histologically. . Predominantly affects the cranial branches of arteries arising from the arch of the aorta. . Incidence is reported as 2.2/10 000 patient-years in the UK [1] and between 7 and 29/100 000 in population age >50 years in Europe. . Incidence rates appear higher in northern climates.", "title": "" }, { "docid": "833c110e040311909aa38b05e457b2af", "text": "The scyphozoan Aurelia aurita (Linnaeus) s. l., is a cosmopolitan species-complex which blooms seasonally in a variety of coastal and shelf sea environments around the world. We hypothesized that ephyrae of Aurelia sp.1 are released from the inner part of the Jiaozhou Bay, China when water temperature is below 15°C in late autumn and winter. The seasonal occurrence, growth, and variation of the scyphomedusa Aurelia sp.1 were investigated in Jiaozhou Bay from January 2011 to December 2011. Ephyrae occurred from May through June with a peak abundance of 2.38 ± 0.56 ind/m3 in May, while the temperature during this period ranged from 12 to 18°C. The distribution of ephyrae was mainly restricted to the coastal area of the bay, and the abundance was higher in the dock of the bay than at the other inner bay stations. 
Young medusae derived from ephyrae with a median diameter of 9.74 ± 1.7 mm were present from May 22. Growth was rapid from May 22 to July 2 with a maximum daily growth rate of 39%. Median diameter of the medusae was 161.80 ± 18.39 mm at the beginning of July. In August, a high proportion of deteriorated specimens was observed and the median diameter decreased. The highest average abundance is 0.62 ± 1.06 ind/km2 in Jiaozhou Bay in August. The abundance of Aurelia sp.1 medusae was low from September and then decreased to zero. It is concluded that water temperature is the main driver regulating the life cycle of Aurelia sp.1 in Jiaozhou Bay.", "title": "" }, { "docid": "238ae411572961116e47b7f6ebce974c", "text": "Coercing new programmers to adopt disciplined development practices such as thorough unit testing is a challenging endeavor. Test-driven development (TDD) has been proposed as a solution to improve both software design and testing. Test-driven learning (TDL) has been proposed as a pedagogical approach for teaching TDD without imposing significant additional instruction time.\n This research evaluates the effects of students using a test-first (TDD) versus test-last approach in early programming courses, and considers the use of TDL on a limited basis in CS1 and CS2. Software testing, programmer productivity, programmer performance, and programmer opinions are compared between test-first and test-last programming groups. Results from this research indicate that a test-first approach can increase student testing and programmer performance, but that early programmers are very reluctant to adopt a test-first approach, even after having positive experiences using TDD. Further, this research demonstrates that TDL can be applied in CS1/2, but suggests that a more pervasive implementation of TDL may be necessary to motivate and establish disciplined testing practice among early programmers.", "title": "" }, { "docid": "9a13a2baf55676f82457f47d3929a4e7", "text": "Humans are a cultural species, and the study of human psychology benefits from attention to cultural influences. Cultural psychology's contributions to psychological science can largely be divided according to the two different stages of scientific inquiry. Stage 1 research seeks cultural differences and establishes the boundaries of psychological phenomena. Stage 2 research seeks underlying mechanisms of those cultural differences. The literatures regarding these two distinct stages are reviewed, and various methods for conducting Stage 2 research are discussed. The implications of culture-blind and multicultural psychologies for society and intergroup relations are also discussed.", "title": "" }, { "docid": "3ec9f9abda7d8266d9bcbbb34d468fe6", "text": "This paper presents the Homeo-Heterostatic Value Gradients (HHVG) algorithm as a formal account on the constructive interplay between boredom and curiosity which gives rise to effective exploration and superior forward model learning. We offer an instrumental view of action selection, in which an action serves to disclose outcomes that have intrinsic meaningfulness to an agent itself. This motivated two central algorithmic ingredients: devaluation and devaluation progress, both underpin agent's cognition concerning intrinsically generated rewards. The two serve as an instantiation of homeostatic and heterostatic intrinsic motivation. 
A key insight from our algorithm is that the two seemingly opposite motivations can be reconciled; without this reconciliation, exploration and information-gathering cannot be carried out effectively. We supported this claim with empirical evidence, showing that boredom-enabled agents consistently outperformed other curious or explorative agent variants in model-building benchmarks based on self-assisted experience accumulation.", "title": "" }, { "docid": "4d383a53c180d5dc4473ab9d7795639a", "text": "With pervasive applications of medical imaging in health-care, biomedical image segmentation plays a central role in quantitative analysis, clinical diagnosis, and medical intervention. Since manual annotation suffers from limited reproducibility, arduous effort, and excessive time, automatic segmentation is desired to process increasingly larger scale histopathological data. Recently, deep neural networks (DNNs), particularly fully convolutional networks (FCNs), have been widely applied to biomedical image segmentation, attaining much improved performance. At the same time, quantization of DNNs has become an active research topic, which aims to represent weights with less memory (precision) to considerably reduce memory and computation requirements of DNNs while maintaining acceptable accuracy. In this paper, we apply quantization techniques to FCNs for accurate biomedical image segmentation. Unlike the existing literature on quantization, which primarily targets memory and computation complexity reduction, we apply quantization as a method to reduce overfitting in FCNs for better accuracy. Specifically, we focus on a state-of-the-art segmentation framework, suggestive annotation [26], which judiciously extracts representative annotation samples from the original training dataset, obtaining an effective small-sized balanced training dataset. We develop two new quantization processes for this framework: (1) suggestive annotation with quantization for highly representative training samples, and (2) network training with quantization for high accuracy. Extensive experiments on the MICCAI Gland dataset show that both quantization processes can improve the segmentation performance, and our proposed method exceeds the current state-of-the-art performance by up to 1%. In addition, our method has a reduction of up to 6.4x on memory usage.", "title": "" }, { "docid": "c274c85ec3749151f18adaaabeb992b5", "text": "Using SDN to configure and control a multi-site network involves writing code that handles low-level details. We describe preliminary work on a framework that takes a network description and set of policies as input, and handles all the details of deriving routes and installing flow rules in switches. The paper describes key software components and reports preliminary results.", "title": "" }, { "docid": "eab2dfb9e8e129f99e263aef38dee26b", "text": "A fully passive printable chipless RFID system is presented. The chipless tag uses the amplitude and phase of the spectral signature of a multiresonator circuit and provides 1:1 correspondence of data bits. The tag comprises a microstrip spiral multiresonator and cross-polarized transmitting and receiving microstrip ultra-wideband disc-loaded monopole antennas. The reader antenna is a log-periodic dipole antenna with an average 5.5-dBi gain. Firstly, a 6-bit chipless tag is designed to encode 000000 and 010101 IDs. Finally, a 35-bit chipless tag based on the same principle is presented. 
The tag has potentials for low-cost item tagging such as banknotes and secured documents.", "title": "" }, { "docid": "ffb985bd04ee3c3b1fac261e8acd5bdf", "text": "Light-field cameras have now become available in both consumer and industrial applications, and recent papers have demonstrated practical algorithms for depth recovery from a passive single-shot capture. However, current lightfield depth estimation methods are designed for Lambertian objects and fail or degrade for glossy or specular surfaces. Because light-field cameras have an array of micro-lenses, the captured data allows modification of both focus and perspective viewpoints. In this paper, we develop an iterative approach to use the benefits of light-field data to estimate and remove the specular component, improving the depth estimation. The approach enables light-field data depth estimation to support both specular and diffuse scenes. We present a physically-based method that estimates one or multiple light source colors. We show our method outperforms current state-of-the-art diffuse and specular separation and depth estimation algorithms in multiple real world scenarios.", "title": "" }, { "docid": "0bb6e496cd176e85fcec98bed669e18d", "text": "Men and women clearly differ in some psychological domains. A. H. Eagly (1995) shows that these differences are not artifactual or unstable. Ideally, the next scientific step is to develop a cogent explanatory framework for understanding why the sexes differ in some psychological domains and not in others and for generating accurate predictions about sex differences as yet undiscovered. This article offers a brief outline of an explanatory framework for psychological sex differences--one that is anchored in the new theoretical paradigm of evolutionary psychology. Men and women differ, in this view, in domains in which they have faced different adaptive problems over human evolutionary history. In all other domains, the sexes are predicted to be psychologically similar. Evolutionary psychology jettisons the false dichotomy between biology and environment and provides a powerful metatheory of why sex differences exist, where they exist, and in what contexts they are expressed (D. M. Buss, 1995).", "title": "" }, { "docid": "66e16c8a22b3505ea5c459feddfb2417", "text": "The evolution of the Internet of Things leads to new opportunities for the contemporary notion of smart offices, where employees can benefit from automation to maximize their productivity and performance. However, although extensive research has been dedicated to analyze the impact of workers’ emotions on their job performance, there is still a lack of pervasive environments that take into account emotional behaviour. In addition, integrating new components in smart environments is not straightforward. To face these challenges, this article proposes an architecture for emotion aware automation platforms based on semantic event-driven rules to automate the adaptation of the workplace to the employee’s needs. The main contributions of this paper are: (i) the design of an emotion aware automation platform architecture for smart offices; (ii) the semantic modelling of the system; and (iii) the implementation and evaluation of the proposed architecture in a real scenario.", "title": "" }, { "docid": "5f72b6caef9b67dfbc1d31ad0675872d", "text": "The squirrel-cage induction motor remains the workhorse of the petrochemical industry because of its versatility and ruggedness. 
However, it has its limitations, which if exceeded will cause premature failure of the stator, rotor, bearings or shaft. This paper is the final abridgement and update of six previous papers for the Petroleum and Chemical Industry Committee of the IEEE Industry Applications Society presented over the last 24 years and includes the final piece dealing with shaft failures. A methodology is provided that will lead operations personnel to the most likely root causes of failure. Check-off sheets are provided to assist in the orderly collection of data to assist in the analysis. As the petrochemical industry evolves from reactive to time based, to preventive, to trending, to diagnostics, and to a predictive maintenance attitude, more and more attention to root cause analysis will be required. This paper will help provide a platform for the establishment of such an evolution. The product scope includes low- and medium-voltage squirrel-cage induction motors in the 1–3000 hp range with antifriction bearings. However, much of this material is applicable to other types and sizes.", "title": "" }, { "docid": "4455233571d9c4fca8cfa2a5eb8ef22f", "text": "This article summarizes the studies of the mechanism of electroacupuncture (EA) in the regulation of the abnormal function of the hypothalamic-pituitary-ovarian axis (HPOA) in our laboratory. Clinical observation showed that EA with the effective acupoints could cure some anovulatory patients at a highly effective rate, and the experimental results suggested that EA might regulate the dysfunction of the HPOA in several ways, which means EA could influence some gene expression in the brain, thereby normalizing the secretion of some hormones, such as GnRH, LH and E2. The effects of EA might possess a relative specificity on acupoints.", "title": "" }, { "docid": "785267ceeca3b691717677801241920c", "text": "Hepatocellular carcinoma (HCC) is one of the most frequently occurring cancers with poor prognosis, and novel diagnostic or prognostic biomarkers and therapeutic targets for HCC are urgently required. With the advance of high-resolution microarrays and massively parallel sequencing technology, lncRNAs are suggested to play critical roles in the tumorigenesis and development of human HCC. To date, dysregulation of many HCC-related lncRNAs such as HULC, HOTAIR, MALAT1, and H19 has been identified. From transcriptional \"noise\" to indispensable elements, lncRNAs may re-write the central dogma. Also, lncRNAs found in body fluids have demonstrated their utility as fluid-based noninvasive markers for clinical use and as therapeutic targets for HCC. Even though several lncRNAs have been characterized, the underlying mechanisms of their contribution to HCC remain unknown, and many important questions about lncRNAs need resolving. A better understanding of the molecular mechanisms of HCC-related lncRNAs will provide a rationale for novel effective lncRNA-based targeted therapies. In this review, we highlight the emerging roles of lncRNAs in HCC, and discuss their potential clinical applications as biomarkers for the diagnosis, prognosis, monitoring and treatment of HCC.", "title": "" }, { "docid": "0cd2da131bf78526c890dae72514a8f0", "text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. 
Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than for females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than for males. Trust exerts stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and the participation of different consumers on their microblogs.", "title": "" }, { "docid": "9cba4b6c754dc393678f6dcda2009d1b", "text": "One of the latest trends in the educational landscape is the introduction of computer programming in the K-12 classroom to develop computational thinking in students. As computational thinking is not a skill exclusively related to computer science, it is assumed – but not yet scientifically proven – that the problem solving process may be generalized and transferred to a wide variety of problems. This paper presents a study designed to test whether the use of coding in Maths classes could have a positive impact on students' learning outcomes in mathematics. Therefore, the questions we want to investigate in this paper are whether the use of programming in Maths classes improves (a) the modeling of processes and real-world phenomena, (b) reasoning, (c) problem formulation and problem solving, and (d) comparison and execution of procedures and algorithms. We have therefore designed a quantitative, quasi-experimental study with 42 participating 6th grade (11 and 12 years old) students. Results show that there is a statistically significant increase in the understanding of mathematical processes in the experimental group, which received training in Scratch.", "title": "" }, { "docid": "9dc52cd5a58077f74868f48021b390af", "text": "Background: Motor development allows infants to gain knowledge of the world but its vital role in social development is often ignored. Method: A systematic search for papers investigating the relationship between motor and social skills was conducted, including research in typical development and in Developmental Coordination Disorder, Autism Spectrum Disorders and Specific Language Impairment. Results: The search identified 42 studies, many of which highlighted a significant relationship between motor skills and the development of social cognition, language and social interactions. Conclusions: This complex relationship requires more attention from researchers and practitioners, allowing the development of more tailored intervention techniques for those at risk of motor, social and language difficulties. 
Key Practitioner Messages: Significant relationships exist between the development of motor skills, social cognition, language and social interactions in typical and atypical development. Practitioners should be aware of the relationships between these aspects of development and understand the impact that early motor difficulties may have on later social skills. Complex relationships between motor and social skills are evident in children with ASD, DCD and SLI. Early screening and more targeted interventions may be appropriate.", "title": "" } ]
scidocsrr
7a68718a0c95772eea2a97acfb9d9bb8
Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning
[ { "docid": "7af26168ae1557d8633a062313d74b78", "text": "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.", "title": "" }, { "docid": "2d7458da22077bec73d24fc29fdc0f62", "text": "This paper studies monocular visual odometry (VO) problem. Most of existing VO algorithms are developed under a standard pipeline including feature extraction, feature matching, motion estimation, local optimisation, etc. Although some of them have demonstrated superior performance, they usually need to be carefully designed and specifically fine-tuned to work well in different environments. Some prior knowledge is also required to recover an absolute scale for monocular VO. This paper presents a novel end-to-end framework for monocular VO by using deep Recurrent Convolutional Neural Networks (RCNNs). Since it is trained and deployed in an end-to-end manner, it infers poses directly from a sequence of raw RGB images (videos) without adopting any module in the conventional VO pipeline. Based on the RCNNs, it not only automatically learns effective feature representation for the VO problem through Convolutional Neural Networks, but also implicitly models sequential dynamics and relations using deep Recurrent Neural Networks. Extensive experiments on the KITTI VO dataset show competitive performance to state-of-the-art methods, verifying that the end-to-end Deep Learning technique can be a viable complement to the traditional VO systems.", "title": "" } ]
[ { "docid": "6d70ac4457983c7df8896a9d31728015", "text": "This brief presents a differential transmit-receive (T/R) switch integrated in a 0.18-mum standard CMOS technology for wireless applications up to 6 GHz. This switch design employs fully differential architecture to accommodate the design challenge of differential transceivers and improve the linearity performance. It exhibits less than 2-dB insertion loss, higher than 15-dB isolation, in a 60 mumtimes40 mum area. 15-dBm power at 1-dB compression point (P1dB) is achieved without using additional techniques to enhance the linearity. This switch is suitable for differential transceiver front-ends with a moderate power level. To the best of the authors' knowledge, this is the first reported differential T/R switch in CMOS for multistandard and wideband wireless applications", "title": "" }, { "docid": "6ce7cce9253698692d270c9bd584d703", "text": "The fast decrease in cost of DNA sequencing has resulted in an enormous growth in available genome data, and hence led to an increasing demand for fast DNA analysis algorithms used for diagnostics of genetic disorders, such as cancer. One of the most computationally intensive steps in the analysis is represented by the DNA read alignment. In this paper, we present an accelerated version of BWA-MEM, one of the most popular read alignment algorithms, by implementing a heterogeneous hardware/software optimized version on the Convey HC2ex platform. A challenging factor of the BWA-MEM algorithm is the fact that it consists of not one, but three computationally intensive kernels: SMEM generation, suffix array lookup and local Smith-Waterman. Obtaining substantial speedup is hence contingent on accelerating all of these three kernels at once. The paper shows an architecture containing two hardware-accelerated kernels and one kernel optimized in software. The two hardware kernels of suffix array lookup and local Smith-Waterman are able to reach speedups of 2.8x and 5.7x, respectively. The software optimization of the SMEM generation kernel is able to achieve a speedup of 1.7x. This enables a total application acceleration of 2.6x compared to the original software version.", "title": "" }, { "docid": "2272d3ac8770f456c1cf2e461eba2da9", "text": "EXECUTiVE SUMMARY This quarter, work continued on the design and construction of a robotic fingerspelling hand. The hand is being designed to aid in communication for individuals who are both deaf and blind. In the winter quarter, research was centered on determining an effective method of actuation for the robotic hand. This spring 2008 quarter, time was spent designing the mechanisms needed to mimic the size and motions of a human hand. Several methods were used to determine a proper size for the robotic hand, including using the ManneQuinPro human modeling system to approximate the size of an average male human hand and using the golden ratio to approximate the length of bone sections within the hand. After a proper average hand size was determined, a finger mechanism was designed in the SolidWorks design program that could be built and used in the robotic hand.", "title": "" }, { "docid": "ac56eb533e3ae40b8300d4269fd2c08f", "text": "We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. 
We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.", "title": "" }, { "docid": "114492ca2cef179a39b5ad5edbc80de0", "text": "We review early and recent psychological theories of dehumanization and survey the burgeoning empirical literature, focusing on six fundamental questions. First, we examine how people are dehumanized, exploring the range of ways in which perceptions of lesser humanness have been conceptualized and demonstrated. Second, we review who is dehumanized, examining the social targets that have been shown to be denied humanness and commonalities among them. Third, we investigate who dehumanizes, notably the personality, ideological, and other individual differences that increase the propensity to see others as less than human. Fourth, we explore when people dehumanize, focusing on transient situational and motivational factors that promote dehumanizing perceptions. Fifth, we examine the consequences of dehumanization, emphasizing its implications for prosocial and antisocial behavior and for moral judgment. Finally, we ask what can be done to reduce dehumanization. We conclude with a discussion of limitations of current scholarship and directions for future research.", "title": "" }, { "docid": "5ca8d0ad48ff44e0659f916af41a7efc", "text": "Automatic retinal vessel segmentation is a fundamental step in the diagnosis of eye-related diseases, in which both thick vessels and thin vessels are important features for symptom detection. All existing deep learning models attempt to segment both types of vessels simultaneously by using a unified pixelwise loss which treats all vessel pixels with equal importance. Due to the highly imbalanced ratio between thick vessels and thin vessels (namely the majority of vessel pixels belong to thick vessels), the pixel-wise loss would be dominantly guided by thick vessels and relatively little influence comes from thin vessels, often leading to low segmentation accuracy for thin vessels. To address the imbalance problem, in this paper, we explore to segment thick vessels and thin vessels separately by proposing a three-stage deep learning model. The vessel segmentation task is divided into three stages, namely thick vessel segmentation, thin vessel segmentation and vessel fusion. As better discriminative features could be learned for separate segmentation of thick vessels and thin vessels, this process minimizes the negative influence caused by their highly imbalanced ratio. The final vessel fusion stage refines the results by further identifying non-vessel pixels and improving the overall vessel thickness consistency. 
Experiments on the public datasets DRIVE, STARE and CHASE DB1 clearly demonstrate that the proposed three-stage deep learning model outperforms the current state-of-the-art vessel segmentation methods.", "title": "" }, { "docid": "a04e2df0d6ca5eae1db6569b43b897bd", "text": "Workflow technologies have become a major vehicle for easy and efficient development of scientific applications. In the meantime, state-of-the-art resource provisioning technologies such as cloud computing enable users to acquire computing resources dynamically and elastically. A critical challenge in integrating workflow technologies with resource provisioning technologies is to determine the right amount of resources required for the execution of workflows in order to minimize the financial cost from the perspective of users and to maximize the resource utilization from the perspective of resource providers. This paper suggests an architecture for the automatic execution of large-scale workflow-based applications on dynamically and elastically provisioned computing resources. In particular, we focus on its core algorithm, named PBTS (Partitioned Balanced Time Scheduling), which estimates the minimum number of computing hosts required to execute a workflow within a user-specified finish time. The PBTS algorithm is designed to fit both elastic resource provisioning models such as Amazon EC2 and malleable parallel application models such as MapReduce. The experimental results with a number of synthetic workflows and several real science workflows demonstrate that PBTS estimates the resource capacity close to the theoretical lower bound.", "title": "" }, { "docid": "9a58dc3eada29c2b929c4442ce0ac025", "text": "Gamification is the application of game elements and game design techniques in non-game contexts to engage and motivate people to achieve their goals. Motivation is an essential requirement for effective and efficient collaboration, which is particularly challenging when people work in a distributed manner. In this paper, we discuss the topics of collaboration, motivation, and gamification in the context of software engineering. We then introduce our long-term research goal: building a theoretical framework that defines how gamification can be used as a collaboration motivator for virtual software teams. We also highlight the roles that social and cultural issues might play in understanding the phenomenon. Finally, we give an overview of our proposed research method to foster discussion during the workshop on how to best investigate the topic.", "title": "" }, { "docid": "b84a82dd36b71c9a3937ca1179c8501d", "text": "Orthogonal frequency division multiplexing (OFDM) has already become a very attractive modulation scheme for many applications. Unfortunately, OFDM is very sensitive to synchronization errors, one of them being phase noise, which is of great importance in modern WLAN systems which target high data rates and tend to use higher frequency bands because of the spectrum availability. In this paper we propose a linear Kalman filter as a means for tracking and suppressing phase noise. The algorithm is pilot based. The performance of the proposed method is investigated and compared with the performance of other known algorithms.", "title": "" }, { "docid": "571a4de4ac93b26d55252dab86e2a0d3", "text": "Amnestic mild cognitive impairment (MCI) is a degenerative neurological disorder at the early stage of Alzheimer’s disease (AD). 
This work is a pilot study aimed at developing a simple scalp-EEG-based method for screening and monitoring MCI and AD. Specifically, the use of graphical analysis of inter-channel coherence of resting EEG for the detection of MCI and AD at early stages is explored. Resting EEG records from 48 age-matched subjects (mean age 75.7 years)—15 normal controls (NC), 16 with early-stage MCI, and 17 with early-stage AD—are examined. Network graphs are constructed using pairwise inter-channel coherence measures for delta–theta, alpha, beta, and gamma band frequencies. Network features are computed and used in a support vector machine model to discriminate among the three groups. Leave-one-out cross-validation discrimination accuracies of 93.6% for MCI vs. NC (p < 0.0003), 93.8% for AD vs. NC (p < 0.0003), and 97.0% for MCI vs. AD (p < 0.0003) are achieved. These results suggest the potential for graphical analysis of resting EEG inter-channel coherence as an efficacious method for noninvasive screening for MCI and early AD.", "title": "" }, { "docid": "b00ce7fc3de34fcc31ada0f66042ef5e", "text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this secure broadcast communication in wired and wireless networks by reading this site. We offer you the best product, always and always.", "title": "" }, { "docid": "11023d6501dfef64d74e08b5d285c48c", "text": "Even though an individual’s knowledge network is known to contribute to the effectiveness and efficiency of his or her work in groups, the way that network building occurs has not been carefully investigated. In our study, activities of new product development teams were analyzed to determine the antecedents and consequences on the transactive memory systems, the moderating affect of task complexity was also considered. We examined 69 new product development projects and found that team stability, team member familiarity, and interpersonal trust had a positive impact on the transactive memory system and also had a positive influence on team learning, speed-to-market, and new product success. Further, we found that the impact of the transactive memory system on team learning, speed-to-market, and new product success was higher when there was a higher task complexity. Theoretical and managerial implications of the study findings are discussed. # 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7faf84305cd91d49d1a21af5942e9a78", "text": "Ultradian rhythms of alternating cerebral dominance have been demonstrated in humans and other mammals during waking and sleep. Human studies have used the methods of psychological testing and electroencephalography (EEG) as measurements to identify the phase of this natural endogenous rhythm. The periodicity of this rhythm approximates 1.5-3 hours in awake humans. This cerebral rhythm is tightly coupled to another ultradian rhythm known as the nasal cycle, which is regulated by the autonomic nervous system, and is exhibited by greater airflow in one nostril, later switching to the other side. This paper correlates uninostril airflow with varying ratios of verbal/spatial performance in 23 right-handed males. Relatively greater cognitive ability in one hemisphere corresponds to unilateral forced nostril breathing in the contralateral nostril. 
Cognitive performance ratios can be influenced by forcibly altering the breathing pattern.", "title": "" }, { "docid": "c9b6f91a7b69890db88b929140f674ec", "text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.", "title": "" }, { "docid": "8553229613282672e12a175bfaca554d", "text": "The K Nearest Neighbor (kNN) method has widely been used in the applications of data mining and machine learning due to its simple implementation and distinguished performance. However, setting all test data with the same k value in the previous kNN methods has been proven to make these methods impractical in real applications. This article proposes to learn a correlation matrix to reconstruct test data points by training data to assign different k values to different test data points, referred to as the Correlation Matrix kNN (CM-kNN for short) classification. Specifically, the least-squares loss function is employed to minimize the reconstruction error to reconstruct each test data point by all training data points. Then, a graph Laplacian regularizer is advocated to preserve the local structure of the data in the reconstruction process. Moreover, an ℓ1-norm regularizer and an ℓ2, 1-norm regularizer are applied to learn different k values for different test data and to result in low sparsity to remove the redundant/noisy feature from the reconstruction process, respectively. Besides for classification tasks, the kNN methods (including our proposed CM-kNN method) are further utilized to regression and missing data imputation. We conducted sets of experiments for illustrating the efficiency, and experimental results showed that the proposed method was more accurate and efficient than existing kNN methods in data-mining applications, such as classification, regression, and missing data imputation.", "title": "" }, { "docid": "64d711b609fb683b5679ed9f4a42275c", "text": "We address the problem of image feature learning for the applications where multiple factors exist in the image generation process and only some factors are of our interest. We present a novel multi-task adversarial network based on an encoder-discriminator-generator architecture. The encoder extracts a disentangled feature representation for the factors of interest. 
The discriminators classify each of the factors as individual tasks. The encoder and the discriminators are trained cooperatively on factors of interest, but in an adversarial way on factors of distraction. The generator provides further regularization on the learned feature by reconstructing images with shared factors as the input image. We design a new optimization scheme to stabilize the adversarial optimization process when multiple distributions need to be aligned. The experiments on face recognition and font recognition tasks show that our method outperforms the state-of-the-art methods in terms of both recognizing the factors of interest and generalization to images with unseen variations.", "title": "" }, { "docid": "efc341c0a3deb6604708b6db361bfba5", "text": "In recent years, data analysis has become important with increasing data volume. Clustering, which groups objects according to their similarity, has an important role in data analysis. DBSCAN is one of the most effective and popular density-based clustering algorithm and has been successfully implemented in many areas. However, it is a challenging task to determine the input parameter values of DBSCAN algorithm which are neighborhood radius Eps and minimum number of points MinPts. The values of these parameters significantly affect clustering performance of the algorithm. In this study, we propose AE-DBSCAN algorithm which includes a new method to determine the value of neighborhood radius Eps automatically. The experimental evaluations showed that the proposed method outperformed the classical method.", "title": "" }, { "docid": "f2fd1bee7b2770bbf808d8902f4964b4", "text": "Antimicrobial and antiquorum sensing (AQS) activities of fourteen ethanolic extracts of different parts of eight plants were screened against four Gram-positive, five Gram-negative bacteria and four fungi. Depending on the plant part extract used and the test microorganism, variable activities were recorded at 3 mg per disc. Among the Grampositive bacteria tested, for example, activities of Laurus nobilis bark extract ranged between a 9.5 mm inhibition zone against Bacillus subtilis up to a 25 mm one against methicillin resistant Staphylococcus aureus. Staphylococcus aureus and Aspergillus fumigatus were the most susceptible among bacteria and fungi tested towards other plant parts. Of interest is the tangible antifungal activity of a Tecoma capensis flower extract, which is reported for the first time. However, minimum inhibitory concentrations (MIC's) for both bacteria and fungi were relatively high (0.5-3.0 mg). As for antiquorum sensing activity against Chromobacterium violaceum, superior activity (>17 mm QS inhibition) was associated with Sonchus oleraceus and Laurus nobilis extracts and weak to good activity (8-17 mm) was recorded for other plants. In conclusion, results indicate the potential of these plant extracts in treating microbial infections through cell growth inhibition or quorum sensing antagonism, which is reported for the first time, thus validating their medicinal use.", "title": "" }, { "docid": "9e44f467f7fbcd2ab1c6886bbb0099c0", "text": "Email has become one of the fastest and most economical forms of communication. However, the increase of email users have resulted in the dramatic increase of spam emails during the past few years. In this paper, email data was classified using four different classifiers (Neural Network, SVM classifier, Naïve Bayesian Classifier, and J48 classifier). 
The experiment was performed based on different data size and different feature size. The final classification result should be ‘1’ if it is finally spam, otherwise, it should be ‘0’. This paper shows that simple J48 classifier which make a binary tree, could be efficient for the dataset which could be classified as binary tree.", "title": "" }, { "docid": "18c230517b8825b616907548829e341b", "text": "The application of small Remotely-Controlled (R/C) aircraft for aerial photography presents many unique advantages over manned aircraft due to their lower acquisition cost, lower maintenance issue, and superior flexibility. The extraction of reliable information from these images could benefit DOT engineers in a variety of research topics including, but not limited to work zone management, traffic congestion, safety, and environmental. During this effort, one of the West Virginia University (WVU) R/C aircraft, named ‘Foamy’, has been instrumented for a proof-of-concept demonstration of aerial data acquisition. Specifically, the aircraft has been outfitted with a GPS receiver, a flight data recorder, a downlink telemetry hardware, a digital still camera, and a shutter-triggering device. During the flight a ground pilot uses one of the R/C channels to remotely trigger the camera. Several hundred high-resolution geo-tagged aerial photographs were collected during 10 flight experiments at two different flight fields. A Matlab based geo-reference software was developed for measuring distances from an aerial image and estimating the geo-location of each ground asset of interest. A comprehensive study of potential Sources of Errors (SOE) has also been performed with the goal of identifying and addressing various factors that might affect the position estimation accuracy. The result of the SOE study concludes that a significant amount of position estimation error was introduced by either mismatching of different measurements or by the quality of the measurements themselves. The first issue is partially addressed through the design of a customized Time-Synchronization Board (TSB) based on a MOD 5213 embedded microprocessor. The TSB actively controls the timing of the image acquisition process, ensuring an accurate matching of the GPS measurement and the image acquisition time. The second issue is solved through the development of a novel GPS/INS (Inertial Navigation System) based on a 9-state Extended Kalman Filter (EKF). The developed sensor fusion algorithm provides a good estimation of aircraft attitude angle without the need for using expensive sensors. Through the help of INS integration, it also provides a very smooth position estimation that eliminates large jumps typically seen in the raw GPS measurements.", "title": "" } ]
scidocsrr
abff6709a040fb06045380a5d92332e3
Self-Supervised Monocular Image Depth Learning and Confidence Estimation
[ { "docid": "f03f84dd248d06049a177768f0fc8671", "text": "We propose a framework that infers mid-level visual properties of an image by learning about ordinal relationships. Instead of estimating metric quantities directly, the system proposes pairwise relationship estimates for points in the input image. These sparse probabilistic ordinal measurements are globalized to create a dense output map of continuous metric measurements. Estimating order relationships between pairs of points has several advantages over metric estimation: it solves a simpler problem than metric regression, humans are better at relative judgements, so data collection is easier, ordinal relationships are invariant to monotonic transformations of the data, thereby increasing the robustness of the system and providing qualitatively different information. We demonstrate that this frame-work works well on two important mid-level vision tasks: intrinsic image decomposition and depth from an RGB image. We train two systems with the same architecture on data from these two modalities. We provide an analysis of the resulting models, showing that they learn a number of simple rules to make ordinal decisions. We apply our algorithm to depth estimation, with good results, and intrinsic image decomposition, with state-of-the-art results.", "title": "" }, { "docid": "92cc028267bc3f8d44d11035a8212948", "text": "The limitations of current state-of-the-art methods for single-view depth estimation and semantic segmentations are closely tied to the property of perspective geometry, that the perceived size of the objects scales inversely with the distance. In this paper, we show that we can use this property to reduce the learning of a pixel-wise depth classifier to a much simpler classifier predicting only the likelihood of a pixel being at an arbitrarily fixed canonical depth. The likelihoods for any other depths can be obtained by applying the same classifier after appropriate image manipulations. Such transformation of the problem to the canonical depth removes the training data bias towards certain depths and the effect of perspective. The approach can be straight-forwardly generalized to multiple semantic classes, improving both depth estimation and semantic segmentation performance by directly targeting the weaknesses of independent approaches. Conditioning the semantic label on the depth provides a way to align the data to their physical scale, allowing to learn a more discriminative classifier. Conditioning depth on the semantic class helps the classifier to distinguish between ambiguities of the otherwise ill-posed problem. We tested our algorithm on the KITTI road scene dataset and NYU2 indoor dataset and obtained obtained results that significantly outperform current state-of-the-art in both single-view depth and semantic segmentation domain.", "title": "" }, { "docid": "fdfea6d3a5160c591863351395929a99", "text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. 
The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "title": "" }, { "docid": "99582c5c50f5103f15a6777af94c6584", "text": "Depth estimation in computer vision and robotics is most commonly done via stereo vision (stereopsis), in which images from two cameras are used to triangulate and estimate distances. However, there are also numerous monocular visual cues— such as texture variations and gradients, defocus, color/haze, etc.—that have heretofore been little exploited in such systems. Some of these cues apply even in regions without texture, where stereo would work poorly. In this paper, we apply a Markov Random Field (MRF) learning algorithm to capture some of these monocular cues, and incorporate them into a stereo system. We show that by adding monocular cues to stereo (triangulation) ones, we obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone. This holds true for a large variety of environments, including both indoor environments and unstructured outdoor environments containing trees/forests, buildings, etc. Our approach is general, and applies to incorporating monocular cues together with any off-the-shelf stereo system.", "title": "" }, { "docid": "f0c08cb3e23e71bab0ff9ca73a4d7869", "text": "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives comparable performance to that of the state of art supervised methods for single view depth estimation.", "title": "" } ]
[ { "docid": "376369f5e8e9b91de8e9a188d499c740", "text": "Vision based bin picking is increasingly more di cult as the complexity of target objects increases We propose an e cient solution where complex objects are su ciently represented by simple features cues thus invariance to object complexity is established The re gion extraction algorithm utilized in our approach is capable of providing the focus of attention to the simple cues as a trigger toward recognition and pose estima tion Successful bin picking experiments of industrial objects using stereo vision tools are presented", "title": "" }, { "docid": "160fefce1158a9a70a61869d54c4c39a", "text": "We present a new approach for efficient approximate nearest neighbor (ANN) search in high dimensional spaces, extending the idea of Product Quantization. We propose a two level product and vector quantization tree that reduces the number of vector comparisons required during tree traversal. Our approach also includes a novel highly parallelizable re-ranking method for candidate vectors by efficiently reusing already computed intermediate values. Due to its small memory footprint during traversal the method lends itself to an efficient, parallel GPU implementation. This Product Quantization Tree (PQT) approach significantly outperforms recent state of the art methods for high dimensional nearest neighbor queries on standard reference datasets. Ours is the first work that demonstrates GPU performance superior to CPU performance on high dimensional, large scale ANN problems in time-critical real-world applications, like loop-closing in videos.", "title": "" }, { "docid": "447d46cb861541c0b6e542018a05b9d0", "text": "Acupuncture is currently gaining popularity as an important modality of alternative and complementary medicine in the western world. Modern neuroimaging techniques such as functional magnetic resonance imaging, positron emission tomography, and magnetoencephalography open a window into the neurobiological foundations of acupuncture. In this review, we have summarized evidence derived from neuroimaging studies and tried to elucidate both neurophysiological correlates and key experimental factors involving acupuncture. Converging evidence focusing on acute effects of acupuncture has revealed significant modulatory activities at widespread cerebrocerebellar brain regions. Given the delayed effect of acupuncture, block-designed analysis may produce bias, and acupuncture shared a common feature that identified voxels that coded the temporal dimension for which multiple levels of their dynamic activities in concert cause the processing of acupuncture. Expectation in acupuncture treatment has a physiological effect on the brain network, which may be heterogeneous from acupuncture mechanism. \"Deqi\" response, bearing clinical relevance and association with distinct nerve fibers, has the specific neurophysiology foundation reflected by neural responses to acupuncture stimuli. The type of sham treatment chosen is dependent on the research question asked and the type of acupuncture treatment to be tested. Due to the complexities of the therapeutic mechanisms of acupuncture, using multiple controls is an optimal choice.", "title": "" }, { "docid": "964af3f588eb025db7cedbe605d0268b", "text": "In this paper, we propose the new fixedsize ordinally-forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence of words into a fixed-size representation. 
FOFE can model the word order in a sequence using a simple ordinally-forgetting mechanism according to the positions of words. In this work, we have applied FOFE to feedforward neural network language models (FNN-LMs). Experimental results have shown that without using any recurrent feedbacks, FOFE based FNNLMs can significantly outperform not only the standard fixed-input FNN-LMs but also the popular recurrent neural network (RNN) LMs.", "title": "" }, { "docid": "9ec6d61511a4533a1622d8b3234fe59d", "text": "With the development of Web 2.0, many studies have tried to analyze tourist behavior utilizing user-generated contents. The primary purpose of this study is to propose a topic-based sentiment analysis approach, including a polarity classification and an emotion classification. We use the Latent Dirichlet Allocation model to extract topics from online travel review data and analyze the sentiments and emotions for each topic with our proposed approach. The top frequent words are extracted for each topic from online reviews on Ctrip.com. By comparing the relative importance of each topic, we conclude that many tourists prefer to provide “suggestion” reviews. In particular, we propose a new approach to classify the emotions of online reviews at the topic level utilizing an emotion lexicon, focusing on specific emotions to analyze customer complaints. The results reveal that attraction “management” obtains most complaints. These findings may provide useful insights for the development of attractions and the measurement of online destination image. Our proposed method can be used to analyze reviews from many online platforms and domains.", "title": "" }, { "docid": "a8d9b1db27530c5170f5976dfe880bcd", "text": "The success of Deep Learning and its potential use in many important safety- critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models to behave as black boxes and the theoretical hardness of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure. Unfortunately, most of these approaches test their algorithms without comparison with other approaches. As a result, the pros and cons of the different algorithms are not well understood. Motivated by the need to accelerate progress in this very important area, we investigate the trade-offs of a number of different approaches based on Mixed Integer Programming, Satisfiability Modulo Theory, as well as a novel method based on the Branch-and-Bound framework. We also propose a new data set of benchmarks, in addition to a collection of pre- viously released testcases that can be used to compare existing methods. Our analysis not only allows a comparison to be made between different strategies, the comparison of results from different solvers also revealed implementation bugs in published methods. We expect that the availability of our benchmark and the analysis of the different approaches will allow researchers to develop and evaluate promising approaches for making progress on this important topic.", "title": "" }, { "docid": "5057c78719e2b27cd7607c9edb788700", "text": "In this era of evolving technology, there are various channels and platforms through which travelers can find tour information and share their tour experience. These include tourism websites, social network sites, blogs, forums, and various search engines such as Google, Yahoo, etc. 
However, information found in this way is not filtered based on travelers’ preferences. Hence, travelers face an information overflow problem.. There is also increasing demand for more information on local area attractions, such as local food, shopping spots, places of interest and so on during the tour. The goal of this research is to propose a suitable recommendation method for use in a Personalized Location-based Traveler Recommender System (PLTRS) to provide personalized tourism information to its users. A comparative study of available recommender systems and location-based services (LBS) is conducted to explore the different approaches to recommender systems and LBS technology. The effectiveness of the system based on the proposed framework is tested using various scenarios which might be faced by users.", "title": "" }, { "docid": "b2f4295cc36550bbafdb4b94f8fbee7c", "text": "Novel view synthesis aims to synthesize new images from different viewpoints of given images. Most of previous works focus on generating novel views of certain objects with a fixed background. However, for some applications, such as virtual reality or robotic manipulations, large changes in background may occur due to the egomotion of the camera. Generated images of a large-scale environment from novel views may be distorted if the structure of the environment is not considered. In this work, we propose a novel fully convolutional network, that can take advantage of the structural information explicitly by incorporating the inverse depth features. The inverse depth features are obtained from CNNs trained with sparse labeled depth values. This framework can easily fuse multiple images from different viewpoints. To fill the missing textures in the generated image, adversarial loss is applied, which can also improve the overall image quality. Our method is evaluated on the KITTI dataset. The results show that our method can generate novel views of large-scale scene without distortion. The effectiveness of our approach is demonstrated through qualitative and quantitative evaluation. . . .", "title": "" }, { "docid": "d3834e337ca661d3919674a8acc1fa0c", "text": "Relative (or receiver) operating characteristic (ROC) curves are a graphical representation of the relationship between sensitivity and specificity of a laboratory test over all possible diagnostic cutoff values. Laboratory medicine has been slow to adopt the use of ROC curves for the analysis of diagnostic test performance. In this tutorial, we discuss the advantages and limitations of the ROC curve for clinical decision making in laboratory medicine. We demonstrate the construction and statistical uses of ROC analysis, review its published applications in clinical pathology, and comment on its role in the decision analytic framework in laboratory medicine.", "title": "" }, { "docid": "141e927711efe3ee66b0512322bfee9c", "text": "Reputation systems have become an indispensable component of modern E-commerce systems, as they help buyers make informed decisions in choosing trustworthy sellers. To attract buyers and increase the transaction volume, sellers need to earn reasonably high reputation scores. This process usually takes a substantial amount of time. To accelerate this process, sellers can provide price discounts to attract users, but the underlying difficulty is that sellers have no prior knowledge on buyers’ preferences over price discounts. In this article, we develop an online algorithm to infer the optimal discount rate from data. 
We first formulate an optimization framework to select the optimal discount rate given buyers’ discount preferences, which is a tradeoff between the short-term profit and the ramp-up time (for reputation). We then derive the closed-form optimal discount rate, which gives us key insights in applying a stochastic bandits framework to infer the optimal discount rate from the transaction data with regret upper bounds. We show that the computational complexity of evaluating the performance metrics is infeasibly high, and therefore, we develop efficient randomized algorithms with guaranteed performance to approximate them. Finally, we conduct experiments on a dataset crawled from eBay. Experimental results show that our framework can trade 60% of the short-term profit for reducing the ramp-up time by 40%. This reduction in the ramp-up time can increase the long-term profit of a seller by at least 20%.", "title": "" }, { "docid": "9decd7c2c73cf96ace4ec2fdf9a18f26", "text": "To perform tasks specified by natural language instructions, autonomous agents need to extract semantically meaningful representations of language and map it to visual elements and actions in the environment. This problem is called task-oriented language grounding. We propose an end-to-end trainable neural architecture for task-oriented language grounding in 3D environments which assumes no prior linguistic or perceptual knowledge and requires only raw pixels from the environment and the natural language instruction as input. The proposed model combines the image and text representations using a Gated-Attention mechanism and learns a policy to execute the natural language instruction using standard reinforcement and imitation learning methods. We show the effectiveness of the proposed model on unseen instructions as well as unseen maps, both quantitatively and qualitatively. We also introduce a novel environment based on a 3D game engine to simulate the challenges of task-oriented language grounding over a rich set of instructions and environment states.", "title": "" }, { "docid": "da70744d008c2d0f76d6214e2172f1f8", "text": "Advanced mobile technology continues to shape professional environments. Smart cell phones, pocket computers and laptop computers reduce the need of users to remain close to a wired information system infrastructure and allow for task performance in many different contexts. Among the consequences are changes in technology requirements, such as the need to limit weight and size of the devices. In the current paper, we focus on the factors that users find important in mobile devices. Based on a content analysis of online user reviews that was followed by structural equation modeling, we found four factors to be significantly related with overall user evaluation, namely functionality, portability, performance, and usability. Besides the practical relevance for technology developers and managers, our research results contribute to the discussion about the extent to which previously established theories of technology adoption and use are applicable to mobile technology. We also discuss the methodological suitability of online user reviews for the assessment of user requirements, and the complementarity of automated and non-automated forms of content analysis.", "title": "" }, { "docid": "921d9dc34f32522200ddcd606d22b6b4", "text": "The covariancematrix adaptation evolution strategy (CMA-ES) is one of themost powerful evolutionary algorithms for real-valued single-objective optimization. 
In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMAES (MO-CMA-ES) a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. The selection is based on non-dominated sorting using either the crowding-distance or the contributing hypervolume as second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are experimentally shown.", "title": "" }, { "docid": "2e0fb1af3cb0fdd620144eb93d55ef3e", "text": "A privacy policy is a legal document, used by websites to communicate how the personal data that they collect will be managed. By accepting it, the user agrees to release his data under the conditions stated by the policy. Privacy policies should provide enough information to enable users to make informed decisions. Privacy regulations support this by specifying what kind of information has to be provided. As privacy policies can be long and difficult to understand, users tend not to read them. Because of this, users generally agree with a policy without knowing what it states and whether aspects important to him are covered at all. In this paper we present a solution to assist the user by providing a structured way to browse the policy content and by automatically assessing the completeness of a policy, i.e. the degree of coverage of privacy categories important to the user. The privacy categories are extracted from privacy regulations, while text categorization and machine learning techniques are used to verify which categories are covered by a policy. The results show the feasibility of our approach; an automatic classifier, able to associate the right category to paragraphs of a policy with an accuracy approximating that obtainable by a human judge, can be effectively created.", "title": "" }, { "docid": "6751464cdb651ca7a801b9cdaddce233", "text": "Latency- and power-aware offloading is a promising issue in the field of mobile cloud computing today. To provide latency-aware offloading, the concept of cloudlet has evolved. However, offloading an application to the most appropriate cloudlet is still a major challenge. This paper has proposed an application-aware cloudlet selection strategy for multi-cloudlet scenario. Different cloudlets are able to process different types of applications. When a request comes from a mobile device for offloading a task, the application type is verified first. According to the application type, the most suitable cloudlet is selected among multiple cloudlets present near the mobile device. By offloading computation using the proposed strategy, the energy consumption of mobile terminals can be reduced as well as latency in application execution can be decreased. Moreover, the proposed strategy can balance the load of the system by distributing the processes to be offloaded in various cloudlets. 
Consequently, the probability of putting all loads on a single cloudlet can be dealt for load balancing. The proposed algorithm is implemented in the mobile cloud computing laboratory of our university. In the experimental analyses, the sorting and searching processes, numerical operations, game and web service are considered as the tasks to be offloaded to the cloudlets based on the application type. The delays involved in offloading various applications to the cloudlets located at the university laboratory, using proposed algorithm are presented. The mathematical models of total power consumption and delay for the proposed strategy are also developed in this paper.", "title": "" }, { "docid": "5182d5c7bff7ebc4b2a3491e115bd602", "text": "Planning problems are among the most important and well-studied problems in artificial intelligence. They are most typically solved by tree search algorithms that simulate ahead into the future, evaluate future states, and back-up those evaluations to the root of a search tree. Among these algorithms, Monte-Carlo tree search (MCTS) is one of the most general, powerful and widely used. A typical implementation of MCTS uses cleverly designed rules, optimised to the particular characteristics of the domain. These rules control where the simulation traverses, what to evaluate in the states that are reached, and how to back-up those evaluations. In this paper we instead learn where, what and how to search. Our architecture, which we call an MCTSnet, incorporates simulation-based search inside a neural network, by expanding, evaluating and backing-up a vector embedding. The parameters of the network are trained end-to-end using gradient-based optimisation. When applied to small searches in the well-known planning problem Sokoban, the learned search algorithm significantly outperformed MCTS baselines.", "title": "" }, { "docid": "1ec0f3975731aa45c92973024c33a9b6", "text": "This meta-analysis provides an extensive and organized summary of intervention studies in education that are grounded in motivation theory. We identified 74 published and unpublished papers that experimentally manipulated an independent variable and measured an authentic educational outcome within an ecologically valid educational context. Our analyses included 92 independent effect sizes with 38,377 participants. Our results indicated that interventions were generally effective, with an average mean effect size of d = 0.49 (95% confidence interval = [0.43, 0.56]). Although there were descriptive differences in the effect sizes across several moderator variables considered in our analyses, the only significant difference found was for the type of experimental design, with randomized designs having smaller effect sizes than quasi-experimental designs. This work illustrates the extent to which interventions and accompanying theories have been tested via experimental methods and provides information about appropriate next steps in developing and testing effective motivation interventions in education.", "title": "" }, { "docid": "a4afaa67327ee6ddb8566e8e0da96e9f", "text": "In this paper, a new face recognition technique is introduced based on the gray-level co-occurrence matrix (GLCM). GLCM represents the distributions of the intensities and the information about relative positions of neighboring pixels of an image. We proposed two methods to extract feature vectors using GLCM for face classification. 
The first method extracts the well-known Haralick features from the GLCM, and the second method directly uses GLCM by converting the matrix into a vector that can be used in the classification process. The results demonstrate that the second method, which uses GLCM directly, is superior to the first method that uses the feature vector containing the statistical Haralick features in both nearest neighbor and neural networks classifiers. The proposed GLCM based face recognition system not only outperforms well-known techniques such as principal component analysis and linear discriminant analysis, but also has comparable performance with local binary patterns and Gabor wavelets.", "title": "" }, { "docid": "b83a0341f2ead9c72eda4217e0f31ea2", "text": "Time-series classification has attracted considerable research attention due to the various domains where time-series data are observed, ranging from medicine to econometrics. Traditionally, the focus of time-series classification has been on short time-series data composed of a few patterns exhibiting variabilities, while recently there have been attempts to focus on longer series composed of multiple local patrepeating with an arbitrary irregularity. The primary contribution of this paper relies on presenting a method which can detect local patterns in repetitive time-series via fitting local polynomial functions of a specified degree. We capture the repetitiveness degrees of time-series datasets via a new measure. Furthermore, our method approximates local polynomials in linear time and ensures an overall linear running time complexity. The coefficients of the polynomial functions are converted to symbolic words via equi-area discretizations of the coefficients' distributions. The symbolic polynomial words enable the detection of similar local patterns by assigning the same word to similar polynomials. Moreover, a histogram of the frequencies of the words is constructed from each time-series' bag of words. Each row of the histogram enables a new representation for the series and symbolizes the occurrence of local patterns and their frequencies. In an experimental comparison against state-of-the-art baselines on repetitive datasets, our method demonstrates significant improvements in terms of prediction accuracy.", "title": "" }, { "docid": "a0de0154b53aa79dfbadc8c37d43bf69", "text": "We investigate the problem of cross-dataset adaptation for visual question answering (Visual QA). Our goal is to train a Visual QA model on a source dataset but apply it to another target one. Analogous to domain adaptation for visual recognition, this setting is appealing when the target dataset does not have a sufficient amount of labeled data to learn an \"in-domain\" model. The key challenge is that the two datasets are constructed differently, resulting in the cross-dataset mismatch on images, questions, or answers. We overcome this difficulty by proposing a novel domain adaptation algorithm. Our method reduces the difference in statistical distributions by transforming the feature representation of the data in the target dataset. Moreover, it maximizes the likelihood of answering questions (in the target dataset) correctly using the Visual QA model trained on the source dataset. We empirically studied the effectiveness of the proposed approach on adapting among several popular Visual QA datasets. We show that the proposed method improves over baselines where there is no adaptation and several other adaptation methods. 
We both quantitatively and qualitatively analyze when the adaptation can be mostly effective.", "title": "" } ]
scidocsrr
8a1765fe8691b8c738896d8b6262b79a
Characterizing logging practices in open-source software
[ { "docid": "e06cc2a4291c800a76fd2a107d2230e4", "text": "Surprisingly, console logs rarely help operators detect problems in large-scale datacenter services, for they often consist of the voluminous intermixing of messages from many software components written by independent developers. We propose a general methodology to mine this rich source of information to automatically detect system runtime problems. We first parse console logs by combining source code analysis with information retrieval to create composite features. We then analyze these features using machine learning to detect operational problems. We show that our method enables analyses that are impossible with previous methods because of its superior ability to create sophisticated features. We also show how to distill the results of our analysis to an operator-friendly one-page decision tree showing the critical messages associated with the detected problems. We validate our approach using the Darkstar online game server and the Hadoop File System, where we detect numerous real problems with high accuracy and few false positives. In the Hadoop case, we are able to analyze 24 million lines of console logs in 3 minutes. Our methodology works on textual console logs of any size and requires no changes to the service software, no human input, and no knowledge of the software's internals.", "title": "" } ]
[ { "docid": "a587f915047435362cbad288e5f679db", "text": "OBJECTIVES\nThe American Academy of Pediatrics recommends that children over age 2 years spend < or = 2 hours per day with screen media, because excessive viewing has been linked to a plethora of physical, academic, and behavioral problems. The primary goal of this study was to qualitatively explore how a recommendation to limit television viewing might be received and responded to by a diverse sample of parents and their school-age children.\n\n\nMETHODS\nThe study collected background data about media use, gathered a household media inventory, and conducted in-depth individual and small group interviews with 180 parents and children ages 6 to 13 years old.\n\n\nRESULTS\nMost of the children reported spending approximately 3 hours per day watching television. The average home in this sample had 4 television sets; nearly two thirds had a television in the child's bedroom, and nearly half had a television set in the kitchen or dining room. Although virtually all of the parents reported having guidelines for children's television viewing, few had rules restricting the time children spend watching television. Data from this exploratory study suggest several potential barriers to implementing a 2-hour limit, including: parents' need to use television as a safe and affordable distraction, parents' own heavy television viewing patterns, the role that television plays in the family's day-to-day routine, and a belief that children should spend their weekend leisure time as they wish. Interviews revealed that for many of these families there is a lack of concern that television viewing is a problem for their child, and there remains confusion about the boundaries of the recommendation of the American Academy of Pediatrics.\n\n\nCONCLUSIONS\nParents in this study expressed interest in taking steps toward reducing children's television time but also uncertainty about how to go about doing so. Results suggest possible strategies to reduce the amount of time children spend in front of the screen.", "title": "" }, { "docid": "ff43a7d84b7f8f6f695557268bf21b15", "text": "Microbial fuel cells (MFCs) can be used to directly generate electricity from the oxidation of dissolved organic matter, but optimization of MFCs will require that we know more about the factors that can increase power output such as the type of proton exchange system which can affect the system internal resistance. Power output in a MFC containing a proton exchange membrane was compared using a pure culture (Geobacter metallireducens) or a mixed culture (wastewater inoculum). Power output with either inoculum was essentially the same, with 40+/-1mW/m2 for G. metallireducens and 38+/-1mW/m2 for the wastewater inoculum. We also examined power output in a MFC with a salt bridge instead of a membrane system. Power output by the salt bridge MFC (inoculated with G. metallireducens) was 2.2mW/m2. The low power output was directly attributed to the higher internal resistance of the salt bridge system (19920+/-50 Ohms) compared to that of the membrane system (1286+/-1Ohms) based on measurements using impedance spectroscopy. In both systems, it was observed that oxygen diffusion from the cathode chamber into the anode chamber was a factor in power generation. Nitrogen gas sparging, L-cysteine (a chemical oxygen scavenger), or suspended cells (biological oxygen scavenger) were used to limit the effects of gas diffusion into the anode chamber. 
Nitrogen gas sparging, for example, increased overall Coulombic efficiency (47% or 55%) compared to that obtained without gas sparging (19%). These results show that increasing power densities in MFCs will require reducing the internal resistance of the system, and that methods are needed to control the dissolved oxygen flux into the anode chamber in order to increase overall Coulombic efficiency.", "title": "" }, { "docid": "23aa04378f4eed573d1290c6bb9d3670", "text": "The ability to compare systems from the same domain is of central importance for their introduction into complex applications. In the domains of named entity recognition and entity linking, the large number of systems and their orthogonal evaluation w.r.t. measures and datasets has led to an unclear landscape regarding the abilities and weaknesses of the different approaches. We present GERBIL—an improved platform for repeatable, storable and citable semantic annotation experiments— and its extension since being release. GERBIL has narrowed this evaluation gap by generating concise, archivable, humanand machine-readable experiments, analytics and diagnostics. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools on multiple datasets. By these means, we aim to ensure that both tool developers and end users can derive meaningful insights into the extension, integration and use of annotation applications. In particular, GERBIL provides comparable results to tool developers, simplifying the discovery of strengths and weaknesses of their implementations with respect to the state-of-the-art. With the permanent experiment URIs provided by our framework, we ensure the reproducibility and archiving of evaluation results. Moreover, the framework generates data in a machine-processable format, allowing for the efficient querying and postprocessing of evaluation results. Additionally, the tool diagnostics provided by GERBIL provide insights into the areas where tools need further refinement, thus allowing developers to create an informed agenda for extensions and end users to detect the right tools for their purposes. Finally, we implemented additional types of experiments including entity typing. GERBIL aims to become a focal point for the state-of-the-art, driving the research agenda of the community by presenting comparable objective evaluation results. Furthermore, we tackle the central problem of the evaluation of entity linking, i.e., we answer the question of how an evaluation algorithm can compare two URIs to each other without being bound to a specific knowledge base. Our approach to this problem opens a way to address the deprecation of URIs of existing gold standards for named entity recognition and entity linking, a feature which is currently not supported by the state-of-the-art. We derived the importance of this feature from usage and dataset requirements collected from the GERBIL user community, which has already carried out more than 24.000 single evaluations using our framework. Through the resulting updates, GERBIL now supports 8 tasks, 46 datasets and 20 systems.", "title": "" }, { "docid": "3e806b72bfeff89596d7fda67511cab2", "text": "Coin classification is one of the main aspects of numismatics. 
The introduction of an automated image-based coin classification system could assist numismatists in their everyday work and allow hobby numismatists to gain additional information on their coin collection by uploading images to a respective Web site. For Roman Republican coins, the inscription is one of the most significant features, and its recognition is an essential part in the successful research of an image-based coin recognition system. This article presents a novel way for the recognition of ancient Roman Republican coin legends. Traditional optical character recognition (OCR) strategies were designed for printed or handwritten texts and rely on binarization in the course of their recognition process. Since coin legends are simply embossed onto a piece of metal, they are of the same color as the background and binarization becomes error prone and prohibits the use of standard OCR. Therefore, the proposed method is based on state-of-the-art scene text recognition methods that are rooted in object recognition. Sift descriptors are computed for a dense grid of keypoints and are tested using support vector machines trained for each letter of the respective alphabet. Each descriptor receives a score for every letter, and the use of pictorial structures allows one to detect the optimal configuration for the lexicon words within an image; the word causing the lowest costs is recognized. Character and word recognition capabilities of the proposed method are evaluated individually; character recognition is benchmarked on three and word recognition on different datasets. Depending on the Sift configuration, lexicon, and dataset used, the word recognition rates range from 29% to 67%.", "title": "" }, { "docid": "fd61461d5033bca2fd5a2be9bfc917b7", "text": "Vehicular networks are very likely to be deployed in the coming years and thus become the most relevant form of mobile ad hoc networks. In this paper, we address the security of these networks. We provide a detailed threat analysis and devise an appropriate security architecture. We also describe some major design decisions still to be made, which in some cases have more than mere technical implications. We provide a set of security protocols, we show that they protect privacy and we analyze their robustness and efficiency.", "title": "" }, { "docid": "5168f7f952d937460d250c44b43f43c0", "text": "This letter presents the design of a coplanar waveguide (CPW) circularly polarized antenna for the central frequency 900 MHz, it comes in handy for radio frequency identification (RFID) short-range reading applications within the band of 902-928 MHz where the axial ratio of proposed antenna model is less than 3 dB. The proposed design has an axial-ratio bandwidth of 36 MHz (4%) and impedance bandwidth of 256 MHz (28.5%).", "title": "" }, { "docid": "c746d527ed6112760f7b047c922a0d46", "text": "New performance leaps has been achieved with multiprogramming and multi-core systems. Present parallel programming techniques and environment needs significant changes in programs to accomplish parallelism and also constitute complex, confusing and error-prone constructs and rules. Intel Cilk Plus is a C based computing system that presents a straight forward and well-structured model for the development, verification and analysis of multicore and parallel programming. In this article, two programs are developed using Intel Cilk Plus. 
Two sequential sorting programs in the C/C++ language are converted to multi-core programs in the Intel Cilk Plus framework to achieve parallelism and better performance. The converted program in Cilk Plus is then checked for various conditions using Cilk tools, and afterwards the performance and speedup achieved over the single-core sequential program are discussed and reported.", "title": "" }, { "docid": "f1c80c3e266029012390c6ac47765cc6", "text": "Whenever clients shop on the Internet, they provide identifying data of themselves to parties like the webshop, shipper and payment system. These identifying data, merged with their shopping history, might be misused for targeted advertisement up to possible manipulations of the clients. The data also contains credit card or bank account numbers, which may be used for unauthorized money transactions by the involved parties or by criminals hacking the parties’ computing infrastructure. In order to minimize these risks, we propose an approach for anonymous shopping by separation of data. We argue for the feasibility of our approach by discussing important operations like simple reclamation cases and criminal investigations.", "title": "" }, { "docid": "84c472e892379f2dce92890edde8d575", "text": "This paper presents a functional verification of a USB2.0 Card Reader, which includes the verification environment, functional coverage model design and course of debug. This system not only finds bugs in the DUT, but also verifies the compliance between hosts and device. The methods of the verification and coverage model design facilitate the verification of the USB Mass Storage project, which needs to be accelerated to market. The verification system has the advantage of being portable to other USB Mass Storage devices.", "title": "" }, { "docid": "e95336e305ac921c01198554da91dcdb", "text": "We consider the problem of staffing call-centers with multiple customer classes and agent types operating under quality-of-service (QoS) constraints and demand rate uncertainty. We introduce a formulation of the staffing problem that requires that the QoS constraints are met with high probability with respect to the uncertainty in the demand rate. We contrast this chance-constrained formulation with the average-performance constraints that have been used so far in the literature. We then propose a two-step solution for the staffing problem under chance constraints. In the first step, we introduce a Random Static Planning Problem (RSPP) and discuss how it can be solved using two different methods. The RSPP provides us with a first-order (or fluid) approximation for the true optimal staffing levels and a staffing frontier. In the second step, we solve a finite number of staffing problems with known arrival rates–the arrival rates on the optimal staffing frontier. Hence, our formulation and solution approach has the important property that it translates the problem with uncertain demand rates to one with known arrival rates.
The output of our procedure is a solution that is feasible with respect to the chance constraint and nearly optimal for large call centers.", "title": "" }, { "docid": "13b879cb250509ff288f5519af381332", "text": "This paper studies the generalization performance of multi-class classification algorithms, for which we obtain—for the first time—a data-dependent generalization error bound with a logarithmic dependence on the class size, substantially improving the state-of-the-art linear dependence in the existing data-dependent generalization analysis. The theoretical analysis motivates us to introduce a new multi-class classification machine based on ℓp-norm regularization, where the parameter p controls the complexity of the corresponding bounds. We derive an efficient optimization algorithm based on Fenchel duality theory. Benchmarks on several real-world datasets show that the proposed algorithm can achieve significant accuracy gains over the state of the art.", "title": "" }, { "docid": "52e1c954aefca110d15c24d90de902b2", "text": "Reinforcement learning (RL) agents can benefit from adaptive exploration/exploitation behavior, especially in dynamic environments. We focus on regulating this exploration/exploitation behavior by controlling the action-selection mechanism of RL. Inspired by psychological studies which show that affect influences human decision making, we use artificial affect to influence an agent’s action-selection. Two existing affective strategies are implemented and, in addition, a new hybrid method that combines both. These strategies are tested on ‘maze tasks’ in which a RL agent has to find food (rewarded location) in a maze. We use Soar-RL, the new RL-enabled version of Soar, as a model environment. One task tests the ability to quickly adapt to an environmental change, while the other tests the ability to escape a local optimum in order to find the global optimum. We show that artificial affect-controlled action-selection in some cases helps agents to faster adapt to changes in the environment.", "title": "" }, { "docid": "99d3354d91a330e7b3bd3cc6204251ca", "text": "PHACE syndrome is a neurocutaneous disorder characterized by large cervicofacial infantile hemangiomas and associated anomalies: posterior fossa brain malformation, hemangioma, arterial cerebrovascular anomalies, coarctation of the aorta and cardiac defects, and eye/endocrine abnormalities of the brain. When ventral developmental defects (sternal clefting or supraumbilical raphe) are present the condition is termed PHACE. In this report, we describe three PHACE cases that presented unique features (affecting one of the organ systems described for this syndrome) that have not been described previously. In the first case, a definitive PHACE association, the patient presented with an ipsilateral mesenteric lymphatic malformation, at the age of 14 years. In the second case, an anomaly of the posterior segment of the eye, not mentioned before in PHACE literature, a retinoblastoma, has been described. Specific chemotherapy avoided enucleation. And, in the third case, the child presented with an unusual midline frontal bone cleft, corresponding to Tessier 14 cleft. Two patients' hemangiomas responded well to propranolol therapy.
The first one was followed and treated in the pre-propranolol era and had a moderate response to corticoids and interferon.", "title": "" }, { "docid": "e2ed500ce298ea175554af97bd0f2f98", "text": "The Climate CoLab is a system to help thousands of people around the world collectively develop plans for what humans should do about global climate change. This paper shows how the system combines three design elements (model-based planning, on-line debates, and electronic voting) in a synergistic way. The paper also reports early usage experience showing that: (a) the system is attracting a continuing stream of new and returning visitors from all over the world, and (b) the nascent community can use the platform to generate interesting and high quality plans to address climate change. These initial results indicate significant progress towards an important goal in developing a collective intelligence system—the formation of a large and diverse community collectively engaged in solving a single problem.", "title": "" }, { "docid": "7b341e406c28255d3cb4df5c4665062d", "text": "We propose MRU (Multi-Range Reasoning Units), a new fast compositional encoder for machine comprehension (MC). Our proposed MRU encoders are characterized by multi-ranged gating, executing a series of parameterized contractand-expand layers for learning gating vectors that benefit from long and short-term dependencies. The aims of our approach are as follows: (1) learning representations that are concurrently aware of long and short-term context, (2) modeling relationships between intra-document blocks and (3) fast and efficient sequence encoding. We show that our proposed encoder demonstrates promising results both as a standalone encoder and as well as a complementary building block. We conduct extensive experiments on three challenging MC datasets, namely RACE, SearchQA and NarrativeQA, achieving highly competitive performance on all. On the RACE benchmark, our model outperforms DFN (Dynamic Fusion Networks) by 1.5% − 6% without using any recurrent or convolution layers. Similarly, we achieve competitive performance relative to AMANDA [17] on the SearchQA benchmark and BiDAF [23] on the NarrativeQA benchmark without using any LSTM/GRU layers. Finally, incorporating MRU encoders with standard BiLSTM architectures further improves performance, achieving state-of-the-art results.", "title": "" }, { "docid": "a5296748b0a93696e7b15f7db9d68384", "text": "Microscopic analysis of breast tissues is necessary for a definitive diagnosis of breast cancer which is the most common cancer among women. Pathology examination requires time consuming scanning through tissue images under different magnification levels to find clinical assessment clues to produce correct diagnoses. Advances in digital imaging techniques offers assessment of pathology images using computer vision and machine learning methods which could automate some of the tasks in the diagnostic pathology workflow. Such automation could be beneficial to obtain fast and precise quantification, reduce observer variability, and increase objectivity. In this work, we propose to classify breast cancer histopathology images independent of their magnifications using convolutional neural networks (CNNs). We propose two different architectures; single task CNN is used to predict malignancy and multi-task CNN is used to predict both malignancy and image magnification level simultaneously. Evaluations and comparisons with previous results are carried out on BreaKHis dataset. 
Experimental results show that our magnification-independent CNN approach improved the performance of the magnification-specific model. Our results in this limited set of training data are comparable with previous state-of-the-art results obtained by hand-crafted features. However, unlike previous methods, our approach has potential to directly benefit from additional training data, and such additional data could be captured with same or different magnification levels than previous data.", "title": "" }, { "docid": "b788c55834247bc80ae935ab00b31822", "text": "How does the neocortex learn and develop the foundations of all our high-level cognitive abilities? We present a comprehensive framework spanning biological, computational, and cognitive levels, with a clear theoretical continuity between levels, providing a coherent answer directly supported by extensive data at each level. Learning is based on making predictions about what the senses will report at 100 msec (alpha frequency) intervals, and adapting synaptic weights to improve prediction accuracy. The pulvinar nucleus of the thalamus serves as a projection screen upon which predictions are generated, through deep-layer 6 corticothalamic inputs from multiple brain areas and levels of abstraction. The sparse driving inputs from layer 5 intrinsic bursting neurons provide the target signal, and the temporal difference between it and the prediction reverberates throughout the cortex, driving synaptic changes that approximate error backpropagation, using only local activation signals in equations derived directly from a detailed biophysical model. In vision, predictive learning requires a carefully-organized developmental progression and anatomical organization of three pathways (What, Where, and What * Where), according to two central principles: top-down input from compact, high-level, abstract representations is essential for accurate prediction of low-level sensory inputs; and the collective, low-level prediction error must be progressively and opportunistically partitioned to enable extraction of separable factors that drive the learning of further high-level abstractions. Our model self-organized systematic invariant object representations of 100 different objects from simple movies, accounts for a wide range of data, and makes many testable predictions.", "title": "" }, { "docid": "4ff50e433ba7a5da179c7d8e5e05cb22", "text": "Social network information is now being used in ways for which it may not have been originally intended. In particular, increased use of smartphones capable of running applications which access social network information enable applications to be aware of a user's location and preferences. However, current models for exchange of this information require users to compromise their privacy and security. We present several of these privacy and security issues, along with our design and implementation of solutions for these issues. Our work allows location-based services to query local mobile devices for users' social network information, without disclosing user identity or compromising users' privacy and security.
We contend that it is important that such solutions be acceptedas mobile social networks continue to grow exponentially.", "title": "" }, { "docid": "ad3add7522b3a58359d36e624e9e65f7", "text": "In this paper, global and local prosodic features extracted from sentence, word and syllables are proposed for speech emotion or affect recognition. In this work, duration, pitch, and energy values are used to represent the prosodic information, for recognizing the emotions from speech. Global prosodic features represent the gross statistics such as mean, minimum, maximum, standard deviation, and slope of the prosodic contours. Local prosodic features represent the temporal dynamics in the prosody. In this work, global and local prosodic features are analyzed separately and in combination at different levels for the recognition of emotions. In this study, we have also explored the words and syllables at different positions (initial, middle, and final) separately, to analyze their contribution towards the recognition of emotions. In this paper, all the studies are carried out using simulated Telugu emotion speech corpus (IITKGP-SESC). These results are compared with the results of internationally known Berlin emotion speech corpus (Emo-DB). Support vector machines are used to develop the emotion recognition models. The results indicate that, the recognition performance using local prosodic features is better compared to the performance of global prosodic features. Words in the final position of the sentences, syllables in the final position of the words exhibit more emotion discriminative information compared to the words and syllables present in the other positions. K.S. Rao ( ) · S.G. Koolagudi · R.R. Vempada School of Information Technology, Indian Institute of Technology Kharagpur, Kharagpur 721302, West Bengal, India e-mail: ksrao@iitkgp.ac.in S.G. Koolagudi e-mail: koolagudi@yahoo.com R.R. Vempada e-mail: ramu.csc@gmail.com", "title": "" }, { "docid": "dfe82129fd128cc2e42f9ed8b3efc9c7", "text": "In this paper we present a new lossless image compression algorithm. To achieve the high compression speed we use a linear prediction, modified Golomb–Rice code family, and a very fast prediction error modeling method. We compare the algorithm experimentally with others for medical and natural continuous tone grayscale images of depths of up to 16 bits. Its results are especially good for big images, for natural images of high bit depths, and for noisy images. The average compression speed on Intel Xeon 3.06 GHz CPU is 47 MB/s. For big images the speed is over 60MB/s, i.e., the algorithm needs less than 50 CPU cycles per byte of image.", "title": "" } ]
scidocsrr
3032aff5ca6b3b0c34facf472452fdd3
Aspect-Based Sentiment Analysis Using Convolutional Neural Network and Bidirectional Long Short-Term Memory
[ { "docid": "2bfd884e92a26d017a7854be3dfb02e8", "text": "The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014.", "title": "" }, { "docid": "fb2287cb1c41441049288335f10fd473", "text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly", "title": "" }, { "docid": "37997245b1a6d10148819e56d978ba04", "text": "Aspect-based sentiment analysis summarizes what people like and dislike from reviews of products or services. In this paper, we adapt the first rank research at SemEval 2016 to improve the performance of aspect-based sentiment analysis for Indonesian restaurant reviews. We use six steps for aspect-based sentiment analysis i.e.: preprocess the reviews, aspect extraction, aspect categorization, sentiment classification, opinion structure generation, and rating calculation. We collect 992 sentences for experiment and 383 sentences for evaluation. We conduct experiment to find best feature combination for aspect extraction, aspect categorization, and sentiment classification. The aspect extraction, aspect categorization, and sentiment classification have F1-measure value of 0.793, 0.823, and 0.642 respectively.", "title": "" } ]
[ { "docid": "acd93c6b041a975dcf52c7bafaf05b16", "text": "Patients with carcinoma of the tongue including the base of the tongue who underwent total glossectomy in a period of just over ten years since January 1979 have been reviewed. Total glossectomy may be indicated as salvage surgery or as a primary procedure. The larynx may be preserved or may have to be sacrificed depending upon the site of the lesion. When the larynx is preserved the use of laryngeal suspension facilitates early rehabilitation and preserves the quality of life to a large extent. Cricopharyngeal myotomy seems unnecessary.", "title": "" }, { "docid": "2c56891c1c9f128553bab35d061049b8", "text": "RISC vs. CISC wars raged in the 1980s when chip area and processor design complexity were the primary constraints and desktops and servers exclusively dominated the computing landscape. Today, energy and power are the primary design constraints and the computing landscape is significantly different: growth in tablets and smartphones running ARM (a RISC ISA) is surpassing that of desktops and laptops running x86 (a CISC ISA). Further, the traditionally low-power ARM ISA is entering the high-performance server market, while the traditionally high-performance x86 ISA is entering the mobile low-power device market. Thus, the question of whether ISA plays an intrinsic role in performance or energy efficiency is becoming important, and we seek to answer this question through a detailed measurement based study on real hardware running real applications. We analyze measurements on the ARM Cortex-A8 and Cortex-A9 and Intel Atom and Sandybridge i7 microprocessors over workloads spanning mobile, desktop, and server computing. Our methodical investigation demonstrates the role of ISA in modern microprocessors' performance and energy efficiency. We find that ARM and x86 processors are simply engineering design points optimized for different levels of performance, and there is nothing fundamentally more energy efficient in one ISA class or the other. The ISA being RISC or CISC seems irrelevant.", "title": "" }, { "docid": "ae2445f8d8f3ccf722417923eb69fe83", "text": "This paper presents the first actor-critic algorithm for o↵-policy reinforcement learning. Our algorithm is online and incremental, and its per-time-step complexity scales linearly with the number of learned weights. Previous work on actor-critic algorithms is limited to the on-policy setting and does not take advantage of the recent advances in o↵policy gradient temporal-di↵erence learning. O↵-policy techniques, such as Greedy-GQ, enable a target policy to be learned while following and obtaining data from another (behavior) policy. For many problems, however, actor-critic methods are more practical than action value methods (like Greedy-GQ) because they explicitly represent the policy; consequently, the policy can be stochastic and utilize a large action space. In this paper, we illustrate how to practically combine the generality and learning potential of o↵policy learning with the flexibility in action selection given by actor-critic methods. 
We derive an incremental, linear time and space complexity algorithm that includes eligibility traces, prove convergence under assumptions similar to previous o↵-policy algorithms, and empirically show better or comparable performance to existing algorithms on standard reinforcement-learning benchmark problems.", "title": "" }, { "docid": "003fc1e182a045889206ec8b1b4b19d8", "text": "Long short-term memory (LSTM) recurrent neural network language models compress the full context of variable lengths into a fixed size vector. In this work, we investigate the task of predicting the LSTM hidden representation of the full context from a truncated n-gram context as a subtask for training an n-gram feedforward language model. Since this approach is a form of knowledge distillation, we compare two methods. First, we investigate the standard transfer based on the Kullback-Leibler divergence of the output distribution of the feedforward model from that of the LSTM. Second, we minimize the mean squared error between the hidden state of the LSTM and that of the n-gram feedforward model. We carry out experiments on different subsets of the Switchboard speech recognition dataset for feedforward models with a short (5-gram) and a medium (10-gram) context length. We show that we get improvements in perplexity and word error rate of up to 8% and 4% relative for the medium model, while the improvements are only marginal for the short model.", "title": "" }, { "docid": "d67c9703ee45ad306384bbc8fe11b50e", "text": "Approximately thirty-four percent of people who experience acute low back pain (LBP) will have recurrent episodes. It remains unclear why some people experience recurrences and others do not, but one possible cause is a loss of normal control of the back muscles. We investigated whether the control of the short and long fibres of the deep back muscles was different in people with recurrent unilateral LBP from healthy participants. Recurrent unilateral LBP patients, who were symptom free during testing, and a group of healthy volunteers, participated. Intramuscular and surface electrodes recorded the electromyographic activity (EMG) of the short and long fibres of the lumbar multifidus and the shoulder muscle, deltoid, during a postural perturbation associated with a rapid arm movement. EMG onsets of the short and long fibres, relative to that of deltoid, were compared between groups, muscles, and sides. In association with a postural perturbation, short fibre EMG onset occurred later in participants with recurrent unilateral LBP than in healthy participants (p=0.022). The short fibres were active earlier than long fibres on both sides in the healthy participants (p<0.001) and on the non-painful side in the LBP group (p=0.045), but not on the previously painful side in the LBP group. Activity of deep back muscles is different in people with a recurrent unilateral LBP, despite the resolution of symptoms. Because deep back muscle activity is critical for normal spinal control, the current results provide the first evidence of a candidate mechanism for recurrent episodes.", "title": "" }, { "docid": "40e1b652587f6c2b26dcddbe3637835b", "text": "A 60-year-old man was referred to our hospital because of dyspnea on exertion. He was diagnosed with heart failure due to an old myocardial infarction. Myocardial stress perfusion scintigraphy revealed inducible myocardial ischemia. Coronary angiography revealed hazy slit lesions in both the left anterior descending (LAD) and right coronary arteries (RCA). 
We first performed percutaneous coronary intervention (PCI) on the LAD lesion. Subsequently, we performed PCI for the RCA lesion using multiple imaging modalities. We observed a lotus root-like appearance in both the LAD and RCA, and PCI was successful for both vessels. We describe this rare case in detail.", "title": "" }, { "docid": "378452932d56f407643ef7d64b754f37", "text": "X-ray screening systems have been used to safeguard environments in which access control is of paramount importance. Security checkpoints have been placed at the entrances to many public places to detect prohibited items, such as handguns and explosives. Generally, human operators are in charge of these tasks as automated recognition in baggage inspection is still far from perfect. Research and development on X-ray testing is, however, exploring new approaches based on computer vision that can be used to aid human operators. This paper attempts to make a contribution to the field of object recognition in X-ray testing by evaluating different computer vision strategies that have been proposed in the last years. We tested ten approaches. They are based on bag of words, sparse representations, deep learning, and classic pattern recognition schemes among others. For each method, we: 1) present a brief explanation; 2) show experimental results on the same database; and 3) provide concluding remarks discussing pros and cons of each method. In order to make fair comparisons, we define a common experimental protocol based on training, validation, and testing data (selected from the public ${\\mathbb {GDX}}$ ray database). The effectiveness of each method was tested in the recognition of three different threat objects: 1) handguns; 2) shuriken (ninja stars); and 3) razor blades. In our experiments, the highest recognition rate was achieved by methods based on visual vocabularies and deep features with more than 95% of accuracy. We strongly believe that it is possible to design an automated aid for the human inspection task using these computer vision algorithms.", "title": "" }, { "docid": "1bf7687bbc4aef6caa9f0fe6484b8945", "text": "The role-based access control (RBAC) framework is a mechanism that describes the access control principle. As a common interaction, an organization provides a service to a user who owns a certain role that was issued by a different organization. Such trans-organizational RBAC is common in face-to-face communication but not in a computer network, because it is difficult to establish both the security that prohibits the malicious impersonation of roles and the flexibility that allows small organizations to participate and users to fully control their own roles. In this paper, we present an RBAC using smart contract (RBAC-SC), a platform that makes use of Ethereum’s smart contract technology to realize a trans-organizational utilization of roles. Ethereum is an open blockchain platform that is designed to be secure, adaptable, and flexible. It pioneered smart contracts, which are decentralized applications that serve as “autonomous agents” running exactly as programmed and are deployed on a blockchain. The RBAC-SC uses smart contracts and blockchain technology as versatile infrastructures to represent the trust and endorsement relationship that are essential in the RBAC and to realize a challenge-response authentication protocol that verifies a user’s ownership of roles. 
We describe the RBAC-SC framework, which is composed of two main parts, namely, the smart contract and the challenge-response protocol, and present a performance analysis. A prototype of the smart contract is created and deployed on Ethereum’s Testnet blockchain, and the source code is publicly available.", "title": "" }, { "docid": "c4e43160e9c3d4358d03cc32170e6c60", "text": "A cavity-backed dual slant polarized and low mutual coupling antenna array panel with frequency band from 4.9 to 6 GHz is analyzed and realized for the MIMO antenna 5G applications. The beamforming capability of this array is also explored. The printed cross dipoles fed with balun and enclosed in a cavity are used as radiating elements. The two cross dipoles are placed at an angle of 45° and 135° giving slant polarizations. A <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> subarray of dimension <inline-formula> <tex-math notation=\"LaTeX\">$2.8\\lambda \\times 2.8\\lambda \\times 0.26\\lambda $ </tex-math></inline-formula> where <inline-formula> <tex-math notation=\"LaTeX\">$\\lambda $ </tex-math></inline-formula> is free space wavelength at 6 GHz is designed, fabricated, and experimentally verified. It shows good impedance matching, port isolation, envelope correlation coefficient, and radiation characteristics which are desired for MIMO applications. Beamforming capability in the digital domain is verified using the Keysight SystemVue simulation tool for both <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$16\\times 16$ </tex-math></inline-formula> panel arrays which employ measured 3-D embedded element radiation pattern data of the fabricated <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> subarray. Four simultaneous beams using digital beamforming approach are also presented for the <inline-formula> <tex-math notation=\"LaTeX\">$16 \\times 16$ </tex-math></inline-formula> array for multiuser environment base station antenna applications.", "title": "" }, { "docid": "e00c05ab9796c6c217e00695adcb07ac", "text": "Web 2.0 technologies opened up new perspectives in learning and teaching activities. Collaboration, communication and sharing between learners contribute to the self-regulated learning, a bottom-up approach. The market for smartphones and tablets are growing rapidly. They are being used more often in everyday life. This allows us to support self-regulated learning in a way that learning resources and applications are accessible any time and at any place. This publication focuses on the Personal Learning Environment (PLE) that was launched at Graz University of Technology in 2010. After a first prototype a complete redesign was carried out to fulfill a change towards learner-centered framework. Statistical data show a high increase of attractiveness of the whole system in general. As the next step a mobile version is integrated. A converter for browser-based learning apps within PLE to native smartphone apps leads to the Ubiquitous PLE, which is discussed in this paper in detail.", "title": "" }, { "docid": "85e867bd998e9c68540d4a22305d8bab", "text": "Warped Gaussian processes (WGP) [1] model output observations in regression tasks as a parametric nonlinear transformation of a Gaussian process (GP). 
The use of this nonlinear transformation, which is included as part of the probabilistic model, was shown to enhance performance by providing a better prior model on several data sets. In order to learn its parameters, maximum likelihood was used. In this work we show that it is possible to use a non-parametric nonlinear transformation in WGP and variationally integrate it out. The resulting Bayesian WGP is then able to work in scenarios in which the maximum likelihood WGP failed: Low data regime, data with censored values, classification, etc. We demonstrate the superior performance of Bayesian warped GPs on several real data sets.", "title": "" }, { "docid": "ce096e9ee74932e0e0d04d2638f54d2a", "text": "The Internet of Things is one of the most promising technological developments in information technology. It promises huge financial and nonfinancial benefits across supply chains, in product life cycle and customer relationship applications as well as in smart environments. However, the adoption process of the Internet of Things has been slower than expected. One of the main reasons for this is the missing profitability for each individual stakeholder. Costs and benefits are not equally distributed. Cost benefit sharing models have been proposed to overcome this problem and to enable new areas of application. However, these cost benefit sharing approaches are complex, time consuming, and have failed to achieve broad usage. In this chapter, an alternative concept, suggesting flexible pricing and trading of information, is proposed. On the basis of a beverage supply chain scenario, a prototype installation, based on an open source billing solution and the Electronic Product Code Information Service (EPCIS), is shown as a proof of concept and an introduction to different pricing options. This approach allows a more flexible and scalable solution for cost benefit sharing and may enable new business models for the Internet of Things. University of Bremen, Planning and Control of Production Systems, Germany", "title": "" }, { "docid": "503c9c4d0d8f94d3e7a9ea8ee496e08b", "text": "Memories for context become less specific with time resulting in animals generalizing fear from training contexts to novel contexts. Though much attention has been given to the neural structures that underlie the long-term consolidation of a context fear memory, very little is known about the mechanisms responsible for the increase in fear generalization that occurs as the memory ages. Here, we examine the neural pattern of activation underlying the expression of a generalized context fear memory in male C57BL/6J mice. Animals were context fear conditioned and tested for fear in either the training context or a novel context at recent and remote time points. Animals were sacrificed and fluorescent in situ hybridization was performed to assay neural activation. Our results demonstrate activity of the prelimbic, infralimbic, and anterior cingulate (ACC) cortices as well as the ventral hippocampus (vHPC) underlie expression of a generalized fear memory. To verify the involvement of the ACC and vHPC in the expression of a generalized fear memory, animals were context fear conditioned and infused with 4% lidocaine into the ACC, dHPC, or vHPC prior to retrieval to temporarily inactivate these structures. The results demonstrate that activity of the ACC and vHPC is required for the expression of a generalized fear memory, as inactivation of these regions returned the memory to a contextually precise form. 
Current theories of time-dependent generalization of contextual memories do not predict involvement of the vHPC. Our data suggest a novel role of this region in generalized memory, which should be incorporated into current theories of time-dependent memory generalization. We also show that the dorsal hippocampus plays a prolonged role in contextually precise memories. Our findings suggest a possible interaction between the ACC and vHPC controls the expression of fear generalization.", "title": "" }, { "docid": "15f7718c561aa3add15e43f1319d4bda", "text": "While there have been significant advances in detecting emotions from speech and image recognition, emotion detection on text is still under-explored and remained as an active research field. This paper introduces a corpus for text-based emotion detection on multiparty dialogue as well as deep neural models that outperform the existing approaches for document classification. We first present a new corpus that provides annotation of seven emotions on consecutive utterances in dialogues extracted from the show, Friends. We then suggest four types of sequence-based convolutional neural network models with attention that leverage the sequence information encapsulated in dialogue. Our best model shows the accuracies of 37.9% and 54% for fineand coarsegrained emotions, respectively. Given the difficulty of this task, this is promising.", "title": "" }, { "docid": "1afdefb31d7b780bb78b59ca8b0d3d8a", "text": "Convolutional Neural Network (CNN) is a very powerful approach to extract discriminative local descriptors for effective image search. Recent work adopts fine-tuned strategies to further improve the discriminative power of the descriptors. Taking a different approach, in this paper, we propose a novel framework to achieve competitive retrieval performance. Firstly, we propose various masking schemes, namely SIFT-mask, SUM-mask, and MAX-mask, to select a representative subset of local convolutional features and remove a large number of redundant features. We demonstrate that this can effectively address the burstiness issue and improve retrieval accuracy. Secondly, we propose to employ recent embedding and aggregating methods to further enhance feature discriminability. Extensive experiments demonstrate that our proposed framework achieves state-of-the-art retrieval accuracy.", "title": "" }, { "docid": "840463688f36a5fd14efa8a1a35bfb8e", "text": "In this paper, we propose a new hybrid ant colony optimization (ACO) algorithm for feature selection (FS), called ACOFS, using a neural network. A key aspect of this algorithm is the selection of a subset of salient features of reduced size. ACOFS uses a hybrid search technique that combines the advantages of wrapper and filter approaches. In order to facilitate such a hybrid search, we designed new sets of rules for pheromone update and heuristic information measurement. On the other hand, the ants are guided in correct directions while constructing graph (subset) paths using a bounded scheme in each and every step in the algorithm. The above combinations ultimately not only provide an effective balance between exploration and exploitation of ants in the search, but also intensify the global search capability of ACO for a highquality solution in FS. We evaluate the performance of ACOFS on eight benchmark classification datasets and one gene expression dataset, which have dimensions varying from 9 to 2000. Extensive experiments were conducted to ascertain how AOCFS works in FS tasks. 
We also compared the performance of ACOFS with the results obtained from seven existing well-known FS algorithms. The comparison details show that ACOFS has a remarkable ability to generate reduced-size subsets of salient features while yielding significant classification accuracy. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9f75be8d3b142b452d09406c39b6a470", "text": "Infants and young children are exposed to a relatively limited range of circumstances that may result in accidental or inflicted asphyxial deaths. These usually involve situations that interfere with oxygen uptake by the blood, or that decrease the amount of circulating oxygen. Typically infants and toddlers asphyxiate in sleeping accidents where they smother when their external airways are covered, hang when clothing is caught on projections inside cots, or wedge when they slip between mattresses and walls. Overlaying may cause asphyxiation due to a combination of airway occlusion and mechanical asphyxia, as may inflicted asphyxia with a pillow. The diagnosis of asphyxiation in infancy is difficult as there are usually no positive findings at autopsy and so differentiating asphyxiation from sudden infant death syndrome (SIDS) based purely on the pathological features will usually not be possible. Similarly, the autopsy findings in inflicted and accidental suffocation will often be identical. Classifications of asphyxia are sometimes confusing as particular types of asphyxiating events may involve several processes and so it may not be possible to precisely compartmentalize a specific incident. For this reason asphyxial events have been classified as being due to: insufficient oxygen availability in the surrounding environment, critical reduction of oxygen transfer from the atmosphere to the blood, impairment of oxygen transport in the circulating blood, or compromise of cellular oxygen uptake. The range of possible findings at the death scene and autopsy are reviewed, and the likelihood of finding markers/indicators of asphyxia is discussed. The conclusion that asphyxiation has occurred often has to be made by integrating aspects of the history, scene, and autopsy, while recognizing that none of these are necessarily pathognomonic, and also by excluding other possibilities. However, even after full investigation a diagnosis of asphyxia may not be possible and a number of issues concerning possible lethal terminal mechanisms may remain unresolved.", "title": "" }, { "docid": "23ba216f846eab3ff8c394ad29b507bf", "text": "The emergence of large-scale freeform shapes in architecture poses big challenges to the fabrication of such structures. A key problem is the approximation of the design surface by a union of patches, so-called panels, that can be manufactured with a selected technology at reasonable cost, while meeting the design intent and achieving the desired aesthetic quality of panel layout and surface smoothness. The production of curved panels is mostly based on molds. Since the cost of mold fabrication often dominates the panel cost, there is strong incentive to use the same mold for multiple panels. We cast the major practical requirements for architectural surface paneling, including mold reuse, into a global optimization framework that interleaves discrete and continuous optimization steps to minimize production cost while meeting user-specified quality constraints. 
The search space for optimization is mainly generated through controlled deviation from the design surface and tolerances on positional and normal continuity between neighboring panels. A novel 6-dimensional metric space allows us to quickly compute approximate inter-panel distances, which dramatically improves the performance of the optimization and enables the handling of complex arrangements with thousands of panels. The practical relevance of our system is demonstrated by paneling solutions for real, cutting-edge architectural freeform design projects.", "title": "" }, { "docid": "623c78e515abee9830eb0b79e773dcec", "text": "The main focus in this research paper is to experiment deeply with, and find alternative solutions to the image segmentation and character recognition problems within the License Plate Recognition framework. Three main stages are identified in such applications. First, it is necessary to locate and extract the license plate region from a larger scene image. Second, having a license plate region to work with, the alphanumeric characters in the plate need to be extracted from the background. Third, deliver them to an character system (BOX APPROACH)for recognition. In order to identify a vehicle by reading its license plate successfully, it is obviously necessary to locate the plate in the scene image provided by some acquisition system (e.g. video or still camera). Locating the region of interest helps in dramatically reducing both the computational expense and algorithm complexity. For example, a currently common1024x768 resolution image contains a total of 786,432pixels, while the region of interest (in this case a license plate) may account for only 10% of the image area. Also, the input to the following segmentation and recognition stages is simplified, resulting in easier algorithm design and shorter computation times. The paper mainly work with the standard license plates but the techniques, algorithms and parameters that is be used can be adjusted easily for any similar number plates even with other alpha-numeric set.", "title": "" }, { "docid": "8439f9d3e33fdbc43c70f1d46e2e143e", "text": "Redacting text documents has traditionally been a mostly manual activity, making it expensive and prone to disclosure risks. This paper describes a semi-automated system to ensure a specified level of privacy in text data sets. Recent work has attempted to quantify the likelihood of privacy breaches for text data. We build on these notions to provide a means of obstructing such breaches by framing it as a multi-class classification problem. Our system gives users fine-grained control over the level of privacy needed to obstruct sensitive concepts present in that data. Additionally, our system is designed to respect a user-defined utility metric on the data (such as disclosure of a particular concept), which our methods try to maximize while anonymizing. We describe our redaction framework, algorithms, as well as a prototype tool built in to Microsoft Word that allows enterprise users to redact documents before sharing them internally and obscure client specific information. In addition we show experimental evaluation using publicly available data sets that show the effectiveness of our approach against both automated attackers and human subjects.The results show that we are able to preserve the utility of a text corpus while reducing disclosure risk of the sensitive concept.", "title": "" } ]
scidocsrr
7e194a4f02aabbd923bd0f9bb26bb37e
Malicious Behavior Detection using Windows Audit Logs
[ { "docid": "cb2d42347e676950bef013b19c8eef70", "text": "One of the major and serious threats on the Internet today is malicious software, often referred to as a malware. The malwares being designed by attackers are polymorphic and metamorphic which have the ability to change their code as they propagate. Moreover, the diversity and volume of their variants severely undermine the effectiveness of traditional defenses which typically use signature based techniques and are unable to detect the previously unknown malicious executables. The variants of malware families share typical behavioral patterns reflecting their origin and purpose. The behavioral patterns obtained either statically or dynamically can be exploited to detect and classify unknown malwares into their known families using machine learning techniques. This survey paper provides an overview of techniques for analyzing and classifying the malwares.", "title": "" } ]
[ { "docid": "41b2b8623a26ca5aa086d16943dad78d", "text": "Over the last several years, DBSCAN (Density-Based Spatial Clustering of Applications with Noise) has been widely applied in many areas of science due to its simplicity, robustness against noise (outlier) and ability to discover clusters of arbitrary shapes. However, DBSCAN algorithm requires two initial input parameters, namely Eps (the radius of the cluster) and MinPts (the minimum data objects required inside the cluster) which both have a significant influence on the clustering results. Hence, DBSCAN is sensitive to its input parameters and it is hard to determine them a priori. This paper presents an efficient and effective hybrid clustering method, named BDE-DBSCAN, that combines Binary Differential Evolution and DBSCAN algorithm to simultaneously quickly and automatically specify appropriate parameter values for Eps and MinPts. Since the Eps parameter can largely degrades the efficiency of the DBSCAN algorithm, the combination of an analytical way for estimating Eps and Tournament Selection (TS) method is also employed. Experimental results indicate the proposed method is precise in determining appropriate input parameters of DBSCAN algorithm.", "title": "" }, { "docid": "d1c14bf02205c9a37761d56a6d88e01e", "text": "BACKGROUND\nSchizophrenia is a high-cost, chronic, serious mental illness. There is a clear need to improve treatments and expand access to care for persons with schizophrenia, but simple, tailored interventions are missing.\n\n\nOBJECTIVE\nTo evaluate the impact of tailored mobile telephone text messages to encourage adherence to medication and to follow up with people with psychosis at 12 months.\n\n\nMETHODS\nMobile.Net is a pragmatic randomized trial with inpatient psychiatric wards allocated to two parallel arms. The trial will include 24 sites and 45 psychiatric hospital wards providing inpatient care in Finland. The participants will be adult patients aged 18-65 years, of either sex, with antipsychotic medication (Anatomical Therapeutic Chemical classification 2011) on discharge from a psychiatric hospital, who have a mobile phone, are able to use the Finnish language, and are able to give written informed consent to participate in the study. The intervention group will receive semiautomatic system (short message service [SMS]) messages after they have been discharged from the psychiatric hospital. Patients will choose the form, content, timing, and frequency of the SMS messages related to their medication, keeping appointments, and other daily care. SMS messages will continue to the end of the study period (12 months) or until participants no longer want to receive the messages. Patients will be encouraged to contact researchers if they feel that they need to adjust the message in any way. At all times, both groups will receive usual care at the discretion of their team (psychiatry and nursing). The primary outcomes are service use and healthy days by 12 months based on routine data (admission to a psychiatric hospital, time to next hospitalization, time in hospital during this year, and healthy days). The secondary outcomes are service use, coercive measures, medication, adverse events, satisfaction with care, the intervention, and the trial, social functioning, and economic factors. Data will be collected 12 months after baseline. The outcomes are based on the national health registers and patients' subjective evaluations. 
The primary analysis will be by intention-to-treat.\n\n\nTRIAL REGISTRATION\nInternational Standard Randomised Controlled Trial Number (ISRCTN): 27704027; http://www.controlled-trials.com/ISRCTN27704027 (Archived by WebCite at http://www.webcitation.org/69FkM4vcq).", "title": "" }, { "docid": "4ee078123815eff49cc5d43550021261", "text": "Generalized anxiety and major depression have become increasingly common in the United States, affecting 18.6 percent of the adult population. Mood disorders can be debilitating, and are often correlated with poor general health, life dissatisfaction, and the need for disability benefits due to inability to work. Recent evidence suggests that some mood disorders have a circadian component, and disruptions in circadian rhythms may even trigger the development of these disorders. However, the molecular mechanisms of this interaction are not well understood. Polymorphisms in a circadian clock-related gene, PER3, are associated with behavioral phenotypes (extreme diurnal preference in arousal and activity) and sleep/mood disorders, including seasonal affective disorder (SAD). Here we show that two PER3 mutations, a variable number tandem repeat (VNTR) allele and a single-nucleotide polymorphism (SNP), are associated with diurnal preference and higher Trait-Anxiety scores, supporting a role for PER3 in mood modulation. In addition, we explore a potential mechanism for how PER3 influences mood by utilizing a comprehensive circadian clock model that accurately predicts the changes in circadian period evident in knock-out phenotypes and individuals with PER3-related clock disorders.", "title": "" }, { "docid": "e4d86871669074b385f8ea36968106c0", "text": "Verbal redundancy arises from the concurrent presentation of text and verbatim speech. To inform theories of multimedia learning that guide the design of educational materials, a meta-analysis was conducted to investigate the effects of spoken-only, written-only, and spoken–written presentations on learning retention and transfer. After an extensive search for experimental studies meeting specified inclusion criteria, data from 57 independent studies were extracted. Most of the research participants were postsecondary students. Overall, this meta-analysis revealed that outcomes comparing spoken–written and written-only presentations did not differ, but students who learned from spoken–written presentations outperformed those who learned from spoken-only presentations. This effect was dependent on learners’ prior knowledge, pacing of presentation, and inclusion of animation or diagrams. Specifically, the advantages of spoken–written presentations over spoken-only presentations were found for low prior knowledge learners, system-paced learning materials, and picture-free materials. In comparison with verbatim, spoken–written presentations, presentations displaying key terms extracted from spoken narrations were associated with better learning outcomes and accounted for much of the advantage of spoken–written over spoken-only presentations. These findings have significant implications for the design of multimedia materials.", "title": "" }, { "docid": "892eb8460429e081770ea4dd13d994c7", "text": "Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. 
Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN’s discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L 1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: The critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only gradients passed along by the critic, with the aim to minimize the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013 SegAN gives performance comparable to the state-of-the-art for whole tumor and tumor core segmentation while achieves better precision and sensitivity for Gd-enhance tumor core segmentation; on BRATS 2015 SegAN achieves better performance than the state-of-the-art in both dice score and precision.", "title": "" }, { "docid": "f3dcf620edb77a199b2ad9d2410cc858", "text": "As the amount of digital data grows, so does the theft of sensitive data through the loss or misplacement of laptops, thumb drives, external hard drives, and other electronic storage media. Sensitive data may also be leaked accidentally due to improper disposal or resale of storage media. To protect the secrecy of the entire data lifetime, we must have confidential ways to store and delete data. This survey summarizes and compares existing methods of providing confidential storage and deletion of data in personal computing environments.", "title": "" }, { "docid": "7fe82f7231235ce6d4b16ec103130156", "text": "Autonomous grasping of household objects is one of the major skills that an intelligent service robot necessarily has to provide in order to interact with the environment. In this paper, we propose a grasping strategy for known objects, comprising an off-line, box-based grasp generation technique on 3D shape representations. The complete system is able to robustly detect an object and estimate its pose, flexibly generate grasp hypotheses from the assigned model and perform such hypotheses using visual servoing. We will present experiments implemented on the humanoid platform ARMAR-III.", "title": "" }, { "docid": "005ba4edd01f604a12b787af1359bbd8", "text": "Sentence ordering is one of important tasks in NLP. Previous works mainly focused on improving its performance by using pair-wise strategy. However, it is nontrivial for pairwise models to incorporate the contextual sentence information. In addition, error prorogation could be introduced by using the pipeline strategy in pair-wise models. In this paper, we propose an end-to-end neural approach to address the sentence ordering problem, which uses the pointer network (Ptr-Net) to alleviate the error propagation problem and utilize the whole contextual information. 
Experimental results show the effectiveness of the proposed model. Source codes1 and dataset2 of this paper are available.", "title": "" }, { "docid": "8a7bd0858a51380ed002b43b08a1c9f1", "text": "Unbiased language is a requirement for reference sources like encyclopedias and scientific texts. Bias is, nonetheless, ubiquitous, making it crucial to understand its nature and linguistic realization and hence detect bias automatically. To this end we analyze real instances of human edits designed to remove bias from Wikipedia articles. The analysis uncovers two classes of bias: framing bias, such as praising or perspective-specific words, which we link to the literature on subjectivity; and epistemological bias, related to whether propositions that are presupposed or entailed in the text are uncontroversially accepted as true. We identify common linguistic cues for these classes, including factive verbs, implicatives, hedges, and subjective intensifiers. These insights help us develop features for a model to solve a new prediction task of practical importance: given a biased sentence, identify the bias-inducing word. Our linguistically-informed model performs almost as well as humans tested on the same task.", "title": "" }, { "docid": "c2b41a637cdc46abf0e154368a5990df", "text": "Ideally, the time that an incremental algorithm uses to process a change should be a fimction of the size of the change rather than, say, the size of the entire current input. Based o n a formalization of \"the set of things changed\" by an increInental modification, this paper investigates how and to what extent it is possibh~' to give such a guarantee for a chart-ba.se(l parsing frmnework and discusses the general utility of a tninlmality notion in incremental processing) 1 I n t r o d u c t i o n", "title": "" }, { "docid": "95c1eac3e2f814799c9d6a816714213c", "text": "User interfaces for web image search engine results differ significantly from interfaces for traditional (text) web search results, supporting a richer interaction. In particular, users can see an enlarged image preview by hovering over a result image, and an `image preview' page allows users to browse further enlarged versions of the results, and to click-through to the referral page where the image is embedded. No existing work investigates the utility of these interactions as implicit relevance feedback for improving search ranking, beyond using clicks on images displayed in the search results page. In this paper we propose a number of implicit relevance feedback features based on these additional interactions: hover-through rate, 'converted-hover' rate, referral page click through, and a number of dwell time features. Also, since images are never self-contained, but always embedded in a referral page, we posit that clicks on other images that are embedded on the same referral webpage as a given image can carry useful relevance information about that image. We also posit that query-independent versions of implicit feedback features, while not expected to capture topical relevance, will carry feedback about the quality or attractiveness of images, an important dimension of relevance for web image search. 
In an extensive set of ranking experiments in a learning to rank framework, using a large annotated corpus, the proposed features give statistically significant gains of over 2% compared to a state of the art baseline that uses standard click features.", "title": "" }, { "docid": "243c14b8ea40b697449200627a09a897", "text": "Nowadays there is a lot of effort on the study, analysis and finding of new solutions related to high density sensor networks used as part of the IoT (Internet of Things) concept. LoRa (Long Range) is a modulation technique that enables the long-range transfer of information with a low transfer rate. This paper presents a review of the challenges and the obstacles of IoT concept with emphasis on the LoRa technology. A LoRaWAN network (Long Range Network Protocol) is of the Low Power Wide Area Network (LPWAN) type and encompasses battery powered devices that ensure bidirectional communication. The main contribution of the paper is the evaluation of the LoRa technology considering the requirements of IoT. In conclusion LoRa can be considered a suitable candidate in addressing the IoT challenges.", "title": "" }, { "docid": "b7b3690f547e479627cc1262ae080b8f", "text": "This article investigates the vulnerabilities of Supervisory Control and Data Acquisition (SCADA) systems which monitor and control the modern day irrigation canal systems. This type of monitoring and control infrastructure is also common for many other water distribution systems. We present a linearized shallow water partial differential equation (PDE) system that can model water flow in a network of canal pools which are equipped with lateral offtakes for water withdrawal and are connected by automated gates. The knowledge of the system dynamics enables us to develop a deception attack scheme based on switching the PDE parameters and proportional (P) boundary control actions, to withdraw water from the pools through offtakes. We briefly discuss the limits on detectability of such attacks. We use a known formulation based on low frequency approximation of the PDE model and an associated proportional integral (PI) controller, to create a stealthy deception scheme capable of compromising the performance of the closed-loop system. We test the proposed attack scheme in simulation, using a shallow water solver; and show that the attack is indeed realizable in practice by implementing it on a physical canal in Southern France: the Gignac canal. A successful field experiment shows that the attack scheme enables us to steal water stealthily from the canal until the end of the attack.", "title": "" }, { "docid": "af0328c3a271859d31c0e3993db7105e", "text": "The increasing bandwidth demand in data centers and telecommunication infrastructures had prompted new electrical interface standards capable of operating up to 56Gb/s per-lane. The CEI-56G-VSR-PAM4 standard [1] defines PAM-4 signaling at 56Gb/s targeting chip-to-module interconnect. Figure 6.3.1 shows the measured S21 of a channel resembling such interconnects and the corresponding single-pulse response after TX-FIR and RX CTLE. Although the S21 is merely ∼10dB at 14GHz, the single-pulse response exhibits significant reflections from impedance discontinuities, mainly between package and PCB traces. These reflections are detrimental to PAM-4 signaling and cannot be equalized effectively by RX CTLE and/or a few taps of TX feed-forward equalization. 
This paper presents the design of a PAM-4 receiver using 10-tap direct decision-feedback equalization (DFE) targeting such VSR channels.", "title": "" }, { "docid": "e2e961e43be79f61e57068faed2e7ca9", "text": "A new type of one-dimensional leaky-wave antenna (LWA) with independent control of the beam-pointing angle and beamwidth is presented. The antenna is based on a simple structure composed of a bulk parallel-plate waveguide (PPW) loaded with two printed circuit boards (PCBs), each one consisting of an array of printed dipoles. One PCB acts as a partially reflective surface (PRS), and the other grounded PCB behaves as a high impedance surface (HIS). It is shown that an independent control of the leaky-mode phase and leakage rate can be achieved by changing the lengths of the PRS and HIS dipoles, thus resulting in a flexible adjustment of the LWA pointing direction and directivity. The leaky-mode dispersion curves are obtained with a simple Transverse Equivalent Network (TEN), and they are validated with three-dimensional full-wave simulations. Experimental results on fabricated prototypes operating at 15 GHz are reported, demonstrating the versatile and independent control of the LWA performance by changing the PRS and HIS parameters.", "title": "" }, { "docid": "51a67685249e0108c337d53b5b1c7c92", "text": "CONTEXT\nEvidence suggests that early adverse experiences play a preeminent role in development of mood and anxiety disorders and that corticotropin-releasing factor (CRF) systems may mediate this association.\n\n\nOBJECTIVE\nTo determine whether early-life stress results in a persistent sensitization of the hypothalamic-pituitary-adrenal axis to mild stress in adulthood, thereby contributing to vulnerability to psychopathological conditions.\n\n\nDESIGN AND SETTING\nProspective controlled study conducted from May 1997 to July 1999 at the General Clinical Research Center of Emory University Hospital, Atlanta, Ga.\n\n\nPARTICIPANTS\nForty-nine healthy women aged 18 to 45 years with regular menses, with no history of mania or psychosis, with no active substance abuse or eating disorder within 6 months, and who were free of hormonal and psychotropic medications were recruited into 4 study groups (n = 12 with no history of childhood abuse or psychiatric disorder [controls]; n = 13 with diagnosis of current major depression who were sexually or physically abused as children; n = 14 without current major depression who were sexually or physically abused as children; and n = 10 with diagnosis of current major depression and no history of childhood abuse).\n\n\nMAIN OUTCOME MEASURES\nAdrenocorticotropic hormone (ACTH) and cortisol levels and heart rate responses to a standardized psychosocial laboratory stressor compared among the 4 study groups.\n\n\nRESULTS\nWomen with a history of childhood abuse exhibited increased pituitary-adrenal and autonomic responses to stress compared with controls. This effect was particularly robust in women with current symptoms of depression and anxiety. Women with a history of childhood abuse and a current major depression diagnosis exhibited a more than 6-fold greater ACTH response to stress than age-matched controls (net peak of 9.0 pmol/L [41.0 pg/mL]; 95% confidence interval [CI], 4.7-13.3 pmol/L [21.6-60. 
4 pg/mL]; vs net peak of 1.4 pmol/L [6.19 pg/mL]; 95% CI, 0.2-2.5 pmol/L [1.0-11.4 pg/mL]; difference, 8.6 pmol/L [38.9 pg/mL]; 95% CI, 4.6-12.6 pmol/L [20.8-57.1 pg/mL]; P<.001).\n\n\nCONCLUSIONS\nOur findings suggest that hypothalamic-pituitary-adrenal axis and autonomic nervous system hyperreactivity, presumably due to CRF hypersecretion, is a persistent consequence of childhood abuse that may contribute to the diathesis for adulthood psychopathological conditions. Furthermore, these results imply a role for CRF receptor antagonists in the prevention and treatment of psychopathological conditions related to early-life stress. JAMA. 2000;284:592-597", "title": "" }, { "docid": "5f70d96454e4a6b8d2ce63bc73c0765f", "text": "The Natural Language Processing group at the University of Szeged has been involved in human language technology research since 1998, and by now, it has become one of the leading workshops of Hungarian computational linguistics. Both computer scientists and linguists enrich the team with their knowledge, moreover, MSc and PhD students are also involved in research activities. The team has gained expertise in the fields of information extraction, implementing basic language processing toolkits and creating language resources. The Group is primarily engaged in processing Hungarian and English texts and its general objective is to develop language-independent or easily adaptable technologies. With the creation of the manually annotated Szeged Corpus and TreeBank, as well as the Hungarian WordNet, SzegedNE and other corpora it has become possible to apply machine learning based methods for the syntactic and semantic analysis of Hungarian texts, which is one of the strengths of the group. They have also implemented novel solutions for the morphological and syntactic parsing of morphologically rich languages and they have also published seminal papers on computational semantics, i.e. uncertainty detection and multiword expressions. They have developed tools for basic linguistic processing of Hungarian, for named entity recognition and for keyphrase extraction, which can all be easily integrated into large-scale systems and are optimizable for the specific needs of the given application. Currently, the group’s research activities focus on the processing of non-canonical texts (e.g. social media texts) and on the implementation of a syntactic parser for Hungarian, among others.", "title": "" }, { "docid": "084079278051a0be5ab44aaa433ae37f", "text": "Primary lymphomas of the heart are extremely rare, accounting for 2% of all primary cardiac tumors. Due to the rare presentation, there is no proper consensus available on treatment strategy. Preoperative confirmation of the pathology is fundamental in guiding an early treatment plan, which allows for improved prognosis. Unfortunately, in most cases, primary cardiac lymphoma is only identified on postoperative histopathological analyses, which affect the treatment plan and outcome. Here, we report a unique case of primary cardiac lymphoma presented with dyspnea and reduced effort tolerance. Young age, rapid onset of symptom, and absence of cardiac risk factors prompted us towards further imaging and emergency resection. 
The patient received a course of postoperative chemotherapy and was disease-free on six months of follow-up.", "title": "" }, { "docid": "6b69666df7a0fcb288acce4c7ff5b77d", "text": "In this paper, a new classification method for enhancing the performance of K-Nearest Neighbor is proposed which uses robust neighbors in training data. This new classification method is called Modified K-Nearest Neighbor, MKNN. Inspired the traditional KNN algorithm, the main idea is classifying the test samples according to their neighbor tags. This method is a kind of weighted KNN so that these weights are determined using a different procedure. The procedure computes the fraction of the same labeled neighbors to the total number of neighbors. The proposed method is evaluated on five different data sets. Experiments show the excellent improvement in accuracy in comparison with KNN method.", "title": "" }, { "docid": "56321ec6dfc3d4c55fc99125e942cf44", "text": "The last decade has seen a substantial body of literature on the recognition of emotion from speech. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test-conditions exist to compare performances under exactly the same conditions. Instead a multiplicity of evaluation strategies employed – such as cross-validation or percentage splits without proper instance definition – prevents exact reproducibility. Further, in order to face more realistic scenarios, the community is in desperate need of more spontaneous and less prototypical data. This INTERSPEECH 2009 Emotion Challenge aims at bridging such gaps between excellent research on human emotion recognition from speech and low compatibility of results. The FAU Aibo Emotion Corpus [1] serves as basis with clearly defined test and training partitions incorporating speaker independence and different room acoustics as needed in most reallife settings. This paper introduces the challenge, the corpus, the features, and benchmark results of two popular approaches towards emotion recognition from speech.", "title": "" } ]
scidocsrr
fa4966dd0e15c3d84cdbceadff25868f
Just Say NO to Paxos Overhead: Replacing Consensus with Network Ordering
[ { "docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0", "text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.", "title": "" }, { "docid": "558082c8d15613164d586cab0ba04d9c", "text": "One of the potential benefits of distributed systems is their use in providing highly-available services that are likely to be usable when needed. Availability is achieved through replication. By having more than one copy of information, a service continues to be usable even when some copies are inaccessible, for example, because of a crash of the computer where a copy was stored. This paper presents a new replication algorithm that has desirable performance properties. Our approach is based on the primary copy technique. Computations run at a primary, which notifies its backups of what it has done. If the primary crashes, the backups are reorganized, and one of the backups becomes the new primary. Our method works in a general network with both node crashes and partitions. Replication causes little delay in user computations and little information is lost in a reorganization; we use a special kind of timestamp called a viewstamp to detect lost information.", "title": "" } ]
[ { "docid": "f10b3f34e63f1c8a1cba703b62cc1043", "text": "BACKGROUND\nDespite the increasing use of very low carbohydrate ketogenic diets (VLCKD) in weight control and management of the metabolic syndrome there is a paucity of research about effects of VLCKD on sport performance. Ketogenic diets may be useful in sports that include weight class divisions and the aim of our study was to investigate the influence of VLCKD on explosive strength performance.\n\n\nMETHODS\n8 athletes, elite artistic gymnasts (age 20.9 ± 5.5 yrs) were recruited. We analyzed body composition and various performance aspects (hanging straight leg raise, ground push up, parallel bar dips, pull up, squat jump, countermovement jump, 30 sec continuous jumps) before and after 30 days of a modified ketogenic diet. The diet was based on green vegetables, olive oil, fish and meat plus dishes composed of high quality protein and virtually zero carbohydrates, but which mimicked their taste, with the addition of some herbal extracts. During the VLCKD the athletes performed the normal training program. After three months the same protocol, tests were performed before and after 30 days of the athletes' usual diet (a typically western diet, WD). A one-way Anova for repeated measurements was used.\n\n\nRESULTS\nNo significant differences were detected between VLCKD and WD in all strength tests. Significant differences were found in body weight and body composition: after VLCKD there was a decrease in body weight (from 69.6 ± 7.3 Kg to 68.0 ± 7.5 Kg) and fat mass (from 5.3 ± 1.3 Kg to 3.4 ± 0.8 Kg p < 0.001) with a non-significant increase in muscle mass.\n\n\nCONCLUSIONS\nDespite concerns of coaches and doctors about the possible detrimental effects of low carbohydrate diets on athletic performance and the well known importance of carbohydrates there are no data about VLCKD and strength performance. The undeniable and sudden effect of VLCKD on fat loss may be useful for those athletes who compete in sports based on weight class. We have demonstrated that using VLCKD for a relatively short time period (i.e. 30 days) can decrease body weight and body fat without negative effects on strength performance in high level athletes.", "title": "" }, { "docid": "6ff0c491facce9ccfbf8465211f78c42", "text": "Users leave digital footprints when interacting with various music streaming services. Music play sequence, which contains rich information about personal music preference and song similarity, has been largely ignored in previous music recommender systems. In this paper, we explore the effects of music play sequence on developing effective personalized music recommender systems. Towards the goal, we propose to use word embedding techniques in music play sequences to estimate the similarity between songs. The learned similarity is then embedded into matrix factorization to boost the latent feature learning and discovery. Furthermore, the proposed method only considers the knearest songs (e.g., k = 5) in the learning process and thus avoids the increase of time complexity. Experimental results on two public datasets demonstrate that our methods could significantly improve the performance on both rating prediction and topn recommendation tasks.", "title": "" }, { "docid": "cff3b4f6db26e66893a9db95fb068ef1", "text": "In this paper, we consider the task of text categorization as a graph classification problem. 
By representing textual documents as graph-of-words instead of historical n-gram bag-of-words, we extract more discriminative features that correspond to long-distance n-grams through frequent subgraph mining. Moreover, by capitalizing on the concept of k-core, we reduce the graph representation to its densest part – its main core – speeding up the feature extraction step for little to no cost in prediction performances. Experiments on four standard text classification datasets show statistically significant higher accuracy and macro-averaged F1-score compared to baseline approaches.", "title": "" }, { "docid": "f5a4d05c8b8c42cdca540794000afad5", "text": "Design thinking (DT) is regarded as a system of three overlapping spaces—viability, desirability, and feasibility—where innovation increases when all three perspectives are addressed. Understanding how innovation within teams can be supported by DT methods and tools captivates the interest of business communities. This paper aims to examine how DT methods and tools foster innovation in teams. A case study approach, based on two workshops, examined three DT methods with a software tool. The findings support the use of DT methods and tools as a way of incubating ideas and creating innovative solutions within teams when team collaboration and software limitations are balanced. The paper proposes guidelines for utilizing DT methods and tools in innovation", "title": "" }, { "docid": "3817cbe08b92d780fb0c462ec5f359ce", "text": "Stability is an important yet under-addressed issue in feature selection from high-dimensional and small sample data. In this paper, we show that stability of feature selection has a strong dependency on sample size. We propose a novel framework for stable feature selection which first identifies consensus feature groups from subsampling of training samples, and then performs feature selection by treating each consensus feature group as a single entity. Experiments on both synthetic and real-world data sets show that an algorithm developed under this framework is effective at alleviating the problem of small sample size and leads to more stable feature selection results and comparable or better generalization performance than state-of-the-art feature selection algorithms. Synthetic data sets and algorithm source code are available at http://www.cs.binghamton.edu/~lyu/KDD09/.", "title": "" }, { "docid": "8c26160ffaf586eb548325d143cc44b6", "text": "Distributed in-memory key-value stores (KVSs), such as memcached, have become a critical data serving layer in modern Internet-oriented data center infrastructure. Their performance and efficiency directly affect the QoS of web services and the efficiency of data centers. Traditionally, these systems have had significant overheads from inefficient network processing, OS kernel involvement, and concurrency control. Two recent research thrusts have focused on improving key-value performance. Hardware-centric research has started to explore specialized platforms including FPGAs for KVSs; results demonstrated an order of magnitude increase in throughput and energy efficiency over stock memcached. 
Software-centric research revisited the KVS application to address fundamental software bottlenecks and to exploit the full potential of modern commodity hardware; these efforts also showed orders of magnitude improvement over stock memcached.\n We aim at architecting high-performance and efficient KVS platforms, and start with a rigorous architectural characterization across system stacks over a collection of representative KVS implementations. Our detailed full-system characterization not only identifies the critical hardware/software ingredients for high-performance KVS systems but also leads to guided optimizations atop a recent design to achieve a record-setting throughput of 120 million requests per second (MRPS) (167MRPS with client-side batching) on a single commodity server. Our system delivers the best performance and energy efficiency (RPS/watt) demonstrated to date over existing KVSs including the best-published FPGA-based and GPU-based claims. We craft a set of design principles for future platform architectures, and via detailed simulations demonstrate the capability of achieving a billion RPS with a single server constructed following our principles.", "title": "" }, { "docid": "166a0aaa57fb6d7297f1c604f4a1caa8", "text": "Neural networks designed for the task of classification have become a commodity in recent years. Many works target the development of better networks, which results in a complexification of their architectures with more layers, multiple sub-networks, or even the combination of multiple classifiers. In this paper, we show how to redesign a simple network to reach excellent performances, which are better than the results reproduced with CapsNet on several datasets, by replacing a layer with a Hit-or-Miss layer. This layer contains activated vectors, called capsules, that we train to hit or miss a central capsule by tailoring a specific centripetal loss function. We also show how our network, named HitNet, is capable of synthesizing a representative sample of the images of a given class by including a reconstruction network. This possibility allows to develop a data augmentation step combining information from the data space and the feature space, resulting in a hybrid data augmentation process. In addition, we introduce the possibility for HitNet, to adopt an alternative to the true target when needed by using the new concept of ghost capsules, which is used here to detect potentially mislabeled images in the training data.", "title": "" }, { "docid": "bd3ba8635a8cd2112a1de52c90e2a04b", "text": "Neural Machine Translation (NMT) is a new technique for machine translation that has led to remarkable improvements compared to rule-based and statistical machine translation (SMT) techniques, by overcoming many of the weaknesses in the conventional techniques. We study and apply NMT techniques to create a system with multiple models which we then apply for six Indian language pairs. We compare the performances of our NMT models with our system using automatic evaluation metrics such as UNK Count, METEOR, F-Measure, and BLEU. We find that NMT techniques are very effective for machine translations of Indian language pairs. 
We then demonstrate that we can achieve good accuracy even using a shallow network; on comparing the performance of Google Translate on our test dataset, our best model outperformed Google Translate by a margin of 17 BLEU points on Urdu-Hindi, 29 BLEU points on Punjabi-Hindi, and 30 BLEU points on Gujarati-Hindi translations.", "title": "" }, { "docid": "84c2fb86faf5dbe1ee8a4da557069c09", "text": "Far (extrapersonal) and near (peripersonal) spaces are behaviorally defined as the space outside the hand-reaching distance and the space within the hand-reaching distance. Animal and human studies have confirmed this distinction, showing that space is not homogeneously represented in the brain. In this paper we demonstrate that the coding of space as far and near is not only determined by the hand-reaching distance, but it is also dependent on how the brain represents the extension of the body space. We will show that when the cerebral representation of body space is extended to include objects or tools used by the subject, space previously mapped as far can be remapped as near. Patient P.P., after a right hemisphere stroke, showed a dissociation between near and far spaces in the manifestation of neglect. Indeed, in a line bisection task, neglect was apparent in near space, but not in far space when bisection in the far space was performed with a projection lightpen. However, when in the far space bisection was performed with a stick, used by the patient to reach the line, neglect appeared and was as severe as neglect in the near space. An artificial extension of the patient's body (the stick) caused a remapping of far space as near space.", "title": "" }, { "docid": "085ec38c3e756504be93ac0b94483cea", "text": "Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today’s commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than for instance spectrum restrictions. Our derivations for co-spreading factor interference found in LoRa networks enables rigorous scalability analysis of such networks.", "title": "" }, { "docid": "ae95673f736e76b4089ba839b19925de", "text": "Cloud computing is emerging as a promising field offering a variety of computing services to end users. These services are offered at different prices using various pricing schemes and techniques. End users will favor the service provider offering the best QoS with the lowest price. Therefore, applying a fair pricing model will attract more customers and achieve higher revenues for service providers. This work focuses on comparing many employed and proposed pricing models techniques and highlights the pros and cons of each. 
The comparison is based on many aspects such as fairness, pricing approach, and utilization period. Such an approach provides a solid ground for designing better models in the future. We have found that most approaches are theoretical and not implemented in the real market, although their simulation results are very promising. Moreover, most of these approaches are biased toward the service provider.", "title": "" }, { "docid": "0cf1f63fd39c8c74465fad866958dac6", "text": "Software development organizations that have been employing capability maturity models, such as SW-CMM or CMMI for improving their processes are now increasingly interested in the possibility of adopting agile development methods. In the context of project management, what can we say about Scrum’s alignment with CMMI? The aim of our paper is to present the mapping between CMMI and the agile method Scrum, showing major gaps between them and identifying how organizations are adopting complementary practices in their projects to make these two approaches more compliant. This is useful for organizations that have a plan-driven process based on the CMMI model and are planning to improve the agility of processes or to help organizations to define a new project management framework based on both CMMI and Scrum practices.", "title": "" }, { "docid": "d655222bf22e35471b18135b67326ac5", "text": "In this paper we approach the robust motion planning problem through the lens of perception-aware planning, whereby we seek a low-cost motion plan subject to a separate constraint on perception localization quality. To solve this problem we introduce the Multiobjective Perception-Aware Planning (MPAP) algorithm which explores the state space via a multiobjective search, considering both cost and a perception heuristic. This perception-heuristic formulation allows us to both capture the history dependence of localization drift and represent complex modern perception methods. The solution trajectory from this heuristic-based search is then certified via Monte Carlo methods to be robust. The additional computational burden of perception-aware planning is offset through massive parallelization on a GPU. Through numerical experiments the algorithm is shown to find robust solutions in about a second. Finally, we demonstrate MPAP on a quadrotor flying perceptionaware and perception-agnostic plans using Google Tango for localization, finding the quadrotor safely executes the perception-aware plan every time, while crashing over 20% of the time on the perception-agnostic due to loss of localization.", "title": "" }, { "docid": "6d61da17db5c16611409356bd79006c4", "text": "We examine empirical evidence for religious prosociality, the hypothesis that religions facilitate costly behaviors that benefit other people. Although sociological surveys reveal an association between self-reports of religiosity and prosociality, experiments measuring religiosity and actual prosocial behavior suggest that this association emerges primarily in contexts where reputational concerns are heightened. Experimentally induced religious thoughts reduce rates of cheating and increase altruistic behavior among anonymous strangers. Experiments demonstrate an association between apparent profession of religious devotion and greater trust. Cross-cultural evidence suggests an association between the cultural presence of morally concerned deities and large group size in humans. 
We synthesize converging evidence from various fields for religious prosociality, address its specific boundary conditions, and point to unresolved questions and novel predictions.", "title": "" }, { "docid": "ee65f0f456d4d229674d3b0bf4f67ca9", "text": "A push-pull transient current feedforward driver is designed to have a complete push-pull function and loop gain control that enhances the data current drivability. The sink and source current capability of the proposed driver makes it insensitive to the initial voltage levels on the data lines and provides a reduced settling time. The gain control in the positive feedback loop offers a fast settling time without ringing over the complete range of pixel drive currents. The data driver exhibits a settling time of better than 6 μs for drive currents from 20 nA to 5 μA into an equivalent full-HD AMOLED display panel parasitic load of 4 kΩ series resistance and 90 pF shunt capacitance. The driver consumes a static current of 4.5 μA/channel.", "title": "" }, { "docid": "01ba4d36dd05cb533e5ff1ea462888d6", "text": "Against a backdrop of serious corporate and mutual fund scandals, governmental bodies, institutional and private investors have demanded more effective corporate governance structures procedures and systems. The compliance function is now an integral part of corporate policy and practice. This paper presents the findings from a longitudinal qualitative research study on the introduction of an IT-based investment management system at four client sites. Using institutional theory to analyze our data, we find the process of institutionalization follows a non-linear pathway where regulative, normative and cultural forces within the investment management industry produce conflicting organizational behaviours and outcomes.", "title": "" }, { "docid": "69ddedba98e93523f698529716cf2569", "text": "A fast and scalable graph processing method becomes increasingly important as graphs become popular in a wide range of applications and their sizes are growing rapidly. Most of distributed graph processing methods require a lot of machines equipped with a total of thousands of CPU cores and a few terabyte main memory for handling billion-scale graphs. Meanwhile, GPUs could be a promising direction toward fast processing of large-scale graphs by exploiting thousands of GPU cores. All of the existing methods using GPUs, however, fail to process large-scale graphs that do not fit in main memory of a single machine. Here, we propose a fast and scalable graph processing method GTS that handles even RMAT32 (64 billion edges) very efficiently only by using a single machine. The proposed method stores graphs in PCI-E SSDs and executes a graph algorithm using thousands of GPU cores while streaming topology data of graphs to GPUs via PCI-E interface. GTS is fast due to no communication overhead and scalable due to no data duplication from graph partitioning among machines. 
Through extensive experiments, we show that GTS consistently and significantly outperforms the major distributed graph processing methods, GraphX, Giraph, and PowerGraph, and the state-of-the-art GPU-based method TOTEM.", "title": "" }, { "docid": "74de5693ada4c4ce9ba327deda8d67a2", "text": "As a result of globalization and climate change, Dirofilaria immitis and Dirofilaria repens, the causative agents of dirofilariosis in Europe, continue to spread from endemic areas in the Mediterranean to northern and northeastern regions of Europe where autochthonous cases of dirofilarial infections have increasingly been observed in dogs and humans. Whilst D. repens was recently reported from mosquitoes in putatively non-endemic areas, D. immitis has never been demonstrated in mosquitoes from Europe outside the Mediterranean. From 2011 to 2013, mosquitoes collected within the framework of a German national mosquito monitoring programme were screened for filarial nematodes using a newly designed filarioid-specific real-time PCR assay. Positive samples were further processed by conventional PCR amplification of the cytochrome c oxidase subunit I (COI) gene, amplicons were sequenced and sequences blasted against GenBank. Approximately 17,000 female mosquitoes were subjected to filarial screening. Out of 955 pools examined, nine tested positive for filariae. Two of the COI sequences indicated D. immitis, one D. repens and four Setaria tundra. Two sequences could not be assigned to a known species due to a lack of similar GenBank entries. Whilst D. immitis and the unknown parasites were detected in Culex pipiens/torrentium, D. repens was found in a single Anopheles daciae and all S. tundra were demonstrated in Aedes vexans. All positive mosquitoes were collected between mid-June and early September. The finding of dirofilariae in German mosquitoes implies the possibility of a local natural transmission cycle. While the routes of introduction to Germany and the origin of the filariae cannot be determined retrospectively, potential culicid vectors and reservoir hosts must prospectively be identified and awareness among physicians, veterinarians and public health personnel be created. The health impact of S. tundra on the indigenous cervid fauna needs further investigation.", "title": "" }, { "docid": "657087aaadc0537e9fb19c422c27b485", "text": "Swarms of embedded devices provide new challenges for privacy and security. We propose Permissioned Blockchains as an effective way to secure and manage these systems of systems. A long view of blockchain technology yields several requirements absent in extant blockchain implementations. Our approach to Permissioned Blockchains meets the fundamental requirements for longevity, agility, and incremental adoption. Distributed Identity Management is an inherent feature of our Permissioned Blockchain and provides for resilient user and device identity and attribute management.", "title": "" }, { "docid": "2f0d6b9bee323a75eea3d15a3cabaeb6", "text": "OBJECTIVE\nThis article reviews the mechanisms and pathophysiology of traumatic brain injury (TBI).\n\n\nMETHODS\nResearch on the pathophysiology of diffuse and focal TBI is reviewed with an emphasis on damage that occurs at the cellular level. The mechanisms of injury are discussed in detail including the factors and time course associated with mild to severe diffuse injury as well as the pathophysiology of focal injuries. 
Examples of electrophysiologic procedures consistent with recent theory and research evidence are presented.\n\n\nRESULTS\nAcceleration/deceleration (A/D) forces rarely cause shearing of nervous tissue, but instead, initiate a pathophysiologic process with a well defined temporal progression. The injury foci are considered to be diffuse trauma to white matter with damage occurring at the superficial layers of the brain, and extending inward as A/D forces increase. Focal injuries result in primary injuries to neurons and the surrounding cerebrovasculature, with secondary damage occurring due to ischemia and a cytotoxic cascade. A subset of electrophysiologic procedures consistent with current TBI research is briefly reviewed.\n\n\nCONCLUSIONS\nThe pathophysiology of TBI occurs over time, in a pattern consistent with the physics of injury. The development of electrophysiologic procedures designed to detect specific patterns of change related to TBI may be of most use to the neurophysiologist.\n\n\nSIGNIFICANCE\nThis article provides an up-to-date review of the mechanisms and pathophysiology of TBI and attempts to address misconceptions in the existing literature.", "title": "" } ]
scidocsrr
d9442d335ee8915fb512edde6eb08e42
An Analysis of Attacks on Blockchain Consensus
[ { "docid": "ed447f3f4bbe8478e9e1f3c4593dbf1b", "text": "We revisit the fundamental question of Bitcoin's security against double spending attacks. While previous work has bounded the probability that a transaction is reversed, we show that no such guarantee can be effectively given if the attacker can choose when to launch the attack. Other approaches that bound the cost of an attack have erred in considering only limited attack scenarios, and in fact it is easy to show that attacks may not cost the attacker at all. We therefore provide a different interpretation of the results presented in previous papers and correct them in several ways. We provide different notions of the security of transactions that provide guarantees to different classes of defenders: merchants who regularly receive payments, miners, and recipients of large one-time payments. We additionally consider an attack that can be launched against lightweight clients, and show that these are less secure than their full node counterparts and provide the right strategy for defenders in this case as well. Our results, overall, improve the understanding of Bitcoin's security guarantees and provide correct bounds for those wishing to safely accept transactions.", "title": "" }, { "docid": "4c452777d851a3d0759cb7b28ee8c53a", "text": "This paper presents TumbleBit, a new unidirectional unlinkable payment hub that is fully compatible with today’s Bitcoin protocol. TumbleBit allows parties to make fast, anonymous, off-blockchain payments through an untrusted intermediary called the Tumbler. TumbleBit’s anonymity properties are similar to classic Chaumian eCash: no one, not even the Tumbler, can link a payment from its payer to its payee. Every payment made via TumbleBit is backed by bitcoins, and comes with a guarantee that Tumbler can neither violate anonymity, nor steal bitcoins, nor “print money” by issuing payments to itself. We prove the security of TumbleBit using the real/ideal world paradigm and the random oracle model. Security follows from the standard RSA assumption and ECDSA unforgeability. We implement TumbleBit, mix payments from 800 users and show that TumbleBit’s offblockchain payments can complete in seconds.", "title": "" } ]
[ { "docid": "6ff0c491facce9ccfbf8465211f78c42", "text": "Users leave digital footprints when interacting with various music streaming services. Music play sequence, which contains rich information about personal music preference and song similarity, has been largely ignored in previous music recommender systems. In this paper, we explore the effects of music play sequence on developing effective personalized music recommender systems. Towards the goal, we propose to use word embedding techniques in music play sequences to estimate the similarity between songs. The learned similarity is then embedded into matrix factorization to boost the latent feature learning and discovery. Furthermore, the proposed method only considers the knearest songs (e.g., k = 5) in the learning process and thus avoids the increase of time complexity. Experimental results on two public datasets demonstrate that our methods could significantly improve the performance on both rating prediction and topn recommendation tasks.", "title": "" }, { "docid": "0575675618e2f2325b8e398a26291611", "text": "We address the problem of temporal action localization in videos. We pose action localization as a structured prediction over arbitrary-length temporal windows, where each window is scored as the sum of frame-wise classification scores. Additionally, our model classifies the start, middle, and end of each action as separate components, allowing our system to explicitly model each actions temporal evolution and take advantage of informative temporal dependencies present in that structure. In this framework, we localize actions by searching for the structured maximal sum, a problem for which we develop a novel, provably-efficient algorithmic solution. The frame-wise classification scores are computed using features from a deep Convolutional Neural Network (CNN), which are trained end-to-end to directly optimize for a novel structured objective. We evaluate our system on the THUMOS 14 action detection benchmark and achieve competitive performance.", "title": "" }, { "docid": "42faf2c0053c9f6a0147fc66c8e4c122", "text": "IN 1921, Gottlieb's discovery of the epithelial attachment of the gingiva opened new horizons which served as the basis for a better understanding of the biology of the dental supporting tissues in health and disease. Three years later his pupils, Orban and Kohler (1924), undertook the task of measuring the epithelial attachment as well as the surrounding tissue relations during the four phases of passive eruption of the tooth. Gottlieb and Orban's descriptions of the epithelial attachment unveiled the exact morphology of this epithelial structure, and clarified the relation of this", "title": "" }, { "docid": "c4f706ff9ceb514e101641a816ba7662", "text": "Open set recognition problems exist in many domains. For example in security, new malware classes emerge regularly; therefore malware classification systems need to identify instances from unknown classes in addition to discriminating between known classes. In this paper we present a neural network based representation for addressing the open set recognition problem. 
In this representation instances from the same class are close to each other while instances from different classes are further apart, resulting in statistically significant improvement when compared to other approaches on three datasets from two different domains.", "title": "" }, { "docid": "ffb1c33b99c37de4dd459637c4e28fae", "text": "This paper presents a method for the measurement of the inductances Ld and Lq of the synchronous reluctance machine in all operating points with consideration of saturation and cross coupling. The measured inductances are used to calculate the currents necessary for optimum torque development in base speed and field weakening range. The implementation of these characteristics in a rotor-oriented control scheme and practical results are demonstrated. Additionally the measured characteristics are used for the development of a new parameter-based sensorless control scheme.", "title": "" }, { "docid": "095c796491edf050dc372799ae82b3d3", "text": "Networks evolve continuously over time with the addition, deletion, and changing of links and nodes. Although many networks contain this type of temporal information, the majority of research in network representation learning has focused on static snapshots of the graph and has largely ignored the temporal dynamics of the network. In this work, we describe a general framework for incorporating temporal information into network embedding methods. The framework gives rise to methods for learning time-respecting embeddings from continuous-time dynamic networks. Overall, the experiments demonstrate the effectiveness of the proposed framework and dynamic network embedding approach as it achieves an average gain of 11.9% across all methods and graphs. The results indicate that modeling temporal dependencies in graphs is important for learning appropriate and meaningful network representations.", "title": "" }, { "docid": "98b30c5056d33f4f92bedc4f2e2698ce", "text": "We present an approach for classifying images of charts based on the shape and spatial relationships of their primitives. Five categories are considered: bar-charts, curve-plots, pie-charts, scatter-plots and surface-plots. We introduce two novel features to represent the structural information based on (a) region segmentation and (b) curve saliency. The local shape is characterized using the Histograms of Oriented Gradients (HOG) and the Scale Invariant Feature Transform (SIFT) descriptors. Each image is represented by sets of feature vectors of each modality. The similarity between two images is measured by the overlap in the distribution of the features -measured using the Pyramid Match algorithm. A test image is classified based on its similarity with training images from the categories. The approach is tested with a database of images collected from the Internet.", "title": "" }, { "docid": "161e66a9e10df9c31b5920788ad8e791", "text": "Our goal is to develop a compositional real-time scheduling framework so that global (system-level) timing properties can be established by composing independently (specified and) analyzed local (component-level) timing properties. The two essential problems in developing such a framework are: (1) to abstract the collective real-time requirements of a component as a single real-time requirement and (2) to compose the component demand abstraction results into the system-level real-time requirement. In our earlier work, we addressed the problems using the Liu and Layland periodic model. 
In this paper, we address the problems using another well-known model, a bounded-delay resource partition model, as a solution model to the problems. To extend our framework to this model, we develop an exact feasibility condition for a set of bounded-delay tasks over a bounded-delay resource partition. In addition, we present simulation results to evaluate the overheads that the component demand abstraction results incur in terms of utilization increase. We also present utilization bound results on a bounded-delay resource model.", "title": "" }, { "docid": "f70ed7588685ed5ff1be5fa1b03a380e", "text": "BACKGROUND\nAcute diarrhoea is a frequent health problem in both travellers and residents that has a social and economic impact. This study compared the efficacy and tolerability of two loperamide-simeticone formulations and a Saccharomyces boulardii capsule as symptomatic treatment.\n\n\nMETHODS\nThis was a prospective, randomised, single (investigator)-blind, three-arm, parallel group, non-inferiority clinical trial in adult subjects with acute diarrhoea at clinics in Mexico and India, with allocation to a loperamide-simeticone 2/125 mg caplet or chewable tablet (maximum eight in 48 h) or S. boulardii (250 mg twice daily for 5 days).\n\n\nOUTCOME MEASURES\nThe primary outcome measure was the number of unformed stools between 0 and 24 h following the initial dose of study medication (NUS 0-24). The secondary outcome measures were time to last unformed stool (TLUS), time to complete relief of diarrhoea (TCRD), time to complete relief of abdominal discomfort (TCRAD) and the subject's evaluation of treatment effectiveness. Follow-up endpoints at 7 days were feeling of complete wellness; stool passed since final study visit; and continued or recurrent diarrhoea.\n\n\nSUBJECTS\nIn this study, 415 subjects were randomised to either a loperamide-simeticone caplet (n = 139), loperamide-simeticone chewable tablet (n = 139) or S. boulardii capsule (n = 137) and were included in the intention-to-treat analysis.\n\n\nRESULTS\nWith regards to mean NUS 0-24, the loperamide-simeticone caplet was non-inferior to loperamide-simeticone tablets (3.4 vs. 3.3; one-sided 97.5 % confidence interval ≤0.5), with both significantly lower than S. boulardii (4.3; p < 0.001). The loperamide-simeticone groups had a shorter median TLUS [14.9 and 14.0 vs. 28.5 h (loperamide-simeticone caplet and chewable tablet groups, respectively, vs. S. boulardii); p < 0.001], TCRD (26.0 and 26.0 vs. 45.8 h; p < 0.001) and TCRAD (12.2 and 12.0 vs. 23.9 h; p < 0.005) than S. boulardii. Treatment effectiveness for overall illness, diarrhoea and abdominal discomfort relief was greater (p < 0.001) in the loperamide-simeticone groups than with S. boulardii. At 7-day follow-up most subjects reported passing stool at least once since the final study visit (loperamide-simeticone caplet 94.1 %, loperamide-simeticone chewable tablet 94.8 %, S. boulardii 97.0 %), did not experience continued or recurrent diarrhoea [loperamide-simeticone caplet 3.7 % (p < 0.03 vs. S. boulardii), loperamide-simeticone chewable tablet 3.7 %, S. boulardii 5.7 %] and felt completely well [loperamide-simeticone caplet 96.3 % (p < 0.02 vs. S. boulardii), loperamide-simeticone chewable tablet 96.3 % (p < 0.02 vs. S. boulardii), S. boulardii 88.6 %]. 
All treatments were well-tolerated with few adverse events.\n\n\nCONCLUSIONS\nThe loperamide-simeticone caplet was non-inferior to the original loperamide-simeticone chewable tablet formulation; both formulations can be expected to demonstrate similar clinical efficacy in the relief of symptoms of acute diarrhoea. Both loperamide-simeticone formulations were superior to the S. boulardii capsule in the primary and secondary endpoints.\n\n\nCLINICAL TRIAL REGISTRATION\nClinicalTrials.gov identifier NCT00807326.", "title": "" }, { "docid": "22ad4568fbf424592c24783fb3037f62", "text": "We propose an unsupervised learning technique for extracting information about authors and topics from large text collections. We model documents as if they were generated by a two-stage stochastic process. An author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words. The probability distribution over topics in a multi-author paper is a mixture of the distributions associated with the authors. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to three large text corpora: 150,000 abstracts from the CiteSeer digital library, 1740 papers from the Neural Information Processing Systems (NIPS) Conferences, and 121,000 emails from the Enron corporation. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, parsing of abstracts by topics and authors, and detection of unusual papers by specific authors. Experiments based on perplexity scores for test documents and precision-recall for document retrieval are used to illustrate systematic differences between the proposed author-topic model and a number of alternatives. Extensions to the model, allowing for example, generalizations of the notion of an author, are also briefly discussed.", "title": "" }, { "docid": "1638f79eff48774b65051468dc9d4167", "text": "Past research suggests that a lower waist-to-chest ratio (WCR) in men (i.e., narrower waist and broader chest) is viewed as attractive by women. However, little work has directly examined why low WCRs are preferred. The current work merged insights from theory and past research to develop a model examining perceived dominance, fitness, and protection ability as mediators of to WCR-attractiveness relationship. These mediators and their link to both short-term (sexual) and long-term (relational) attractiveness were simultaneously tested by having 151 women rate one of 15 avatars, created from 3D body scans. Men with lower WCR were perceived as more physically dominant, physically fit, and better able to protect loved ones; these characteristics differentially mediated the effect of WCR on short-term, long-term, and general attractiveness ratings. Greater understanding of the judgments women form regarding WCR may yield insights into motivations by men to manipulate their body image.", "title": "" }, { "docid": "faf770aba28d13e07573b5bf65db1863", "text": "In the emerging electronic environment, knowing how to create customercentered Web sites is of great importance. This paper reports two studies on user perceptions of Web sites. First, Kano’s model of quality was used in an exploratory investigation of customer quality expectations for a specific type of site (CNN.com). 
The quality model was then extended by treating broader site types/domains. The results showed that (1) customers’ quality expectations change over time, and thus no single quality checklist will be good for very long, (2) the Kano model can be used as a framework or method for identifying quality expectations and the time transition of quality factors, (3) customers in a Web domain do not regard all quality factors as equally important, and (4) the rankings of important quality factors differ from one Web domain to another, but certain factors were regarded as highly important across all the domains studied.", "title": "" }, { "docid": "dc13ecaf82ee33f24f8a435ac3eaed5e", "text": "The business world is rapidly digitizing as companies embrace sensors, mobile devices, radio frequency identification, audio and video streams, software logs, and the Internet to predict needs, avert fraud and waste, understand relationships, and connect with stakeholders both internal and external to the firm. Digitization creates challenges because for most companies it is unevenly distributed throughout the organization: in a 2013 survey, only 39% of company-wide investment in digitization was identified as being in the IT budget (Weill and Woerner, 2013a). This uneven, disconnected investment makes it difficult to consolidate and simplify the increasing amount of data that is one of the outcomes of digitization. This in turn makes it more difficult to derive insight – and then proceed based on that insight. Early big data research identified over a dozen characteristics of data (e.g., location, network associations, latency, structure, softness) that challenge extant data management practices (Santos and Singer, 2012). Constantiou and Kallinikos’ article describes how the nature of big data affects the ability to derive insight, and thus inhibits strategy creation. One of the important insights of this article is how big data challenges the premises and the time horizons of strategy making. Much of big data, while seemingly valuable, does not fit into the recording, measurement, and assessment systems that enterprises have built up to aid in enterprise decision making. And constantly modified and volatile data doesn’t easily form into stable interpretable patterns, confounding prediction. As they note, a focus on real-time data ‘undermines long-term planning, and reframes the trade-offs between short-term and long-term decisions’ (9). While Constantiou and Kallinikos describe the challenges that big data poses to strategy creation, they do not offer insights about how enterprises might ameliorate or even overcome those challenges. Big data is here to stay and every enterprise will have to accommodate the problematic nature of big data as it decides on a course of action. This commentary is an effort to show how big data is being used in practice to craft strategy and the company business model. Research at the MIT Center for Information Systems Research has found that the upsurge in digitization, and the accompanying increase in the amount of data, has prompted companies to reexamine their fundamental business models and explore opportunities to improve and innovate. In both cases, companies are not replacing their business strategy toolboxes, but rather are using existing toolboxes more effectively – they now have access to essential data needed to solve problems or gain insights that was not possible to collect before. 
The results are quite exciting.", "title": "" }, { "docid": "79453a45e1376e1d4cd08002b5e61ac0", "text": "Appropriate selection of learning algorithms is essential for the success of data mining. Meta-learning is one approach to achieve this objective by identifying a mapping from data characteristics to algorithm performance. Appropriate data characterization is, thus, of vital importance for the meta-learning. To this effect, a variety of data characterization techniques, based on three strategies including simple measure, statistical measure and information theory based measure, have been developed, however, the quality of them is still needed to be improved. This paper presents new measures to characterise datasets for meta-learning based on the idea to capture the characteristics from the structural shape and size of the decision tree induced from the dataset. Their effectiveness is illustrated by comparing to the results obtained by the classical data characteristics techniques, including DCT that is the most wide used technique in meta-learning and Landmarking that is the most recently developed method and produced better performance comparing to DCT.", "title": "" }, { "docid": "4bcbe82e888e504fdc5f230de79e14e7", "text": "In this paper, we present results of an empirical investigation into the social structure of YouTube, addressing friend relations and their correlation with tags applied to uploaded videos. Results indicate that YouTube producers are strongly linked to others producing similar content. Furthermore, there is a socially cohesive core of producers of mixed content, with smaller cohesive groups around Korean music video and anime music videos. Thus, social interaction on YouTube appears to be structured in ways similar to other social networking sites, but with greater semantic coherence around content. These results are explained in terms of the relationship of video producers to the tagging of uploaded content on the site.", "title": "" }, { "docid": "4357e361fd35bcbc5d6a7c195a87bad1", "text": "In an age of increasing technology, the possibility that typing on a keyboard will replace handwriting raises questions about the future usefulness of handwriting skills. Here we present evidence that brain activation during letter perception is influenced in different, important ways by previous handwriting of letters versus previous typing or tracing of those same letters. Preliterate, five-year old children printed, typed, or traced letters and shapes, then were shown images of these stimuli while undergoing functional MRI scanning. A previously documented \"reading circuit\" was recruited during letter perception only after handwriting-not after typing or tracing experience. These findings demonstrate that handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading. Handwriting therefore may facilitate reading acquisition in young children.", "title": "" }, { "docid": "26241f7523ce36cb51fd2f4d91b827d0", "text": "We introduce Mix & Match (M&M) – a training framework designed to facilitate rapid and effective learning in RL agents, especially those that would be too slow or too challenging to train otherwise. The key innovation is a procedure that allows us to automatically form a curriculum over agents. Through such a curriculum we can progressively train more complex agents by, effectively, bootstrapping from solutions found by simpler agents. 
In contradistinction to typical curriculum learning approaches, we do not gradually modify the tasks or environments presented, but instead use a process to gradually alter how the policy is represented internally. We show the broad applicability of our method by demonstrating significant performance gains in three different experimental setups: (1) We train an agent able to control more than 700 actions in a challenging 3D first-person task; using our method to progress through an action-space curriculum we achieve both faster training and better final performance than one obtains using traditional methods. (2) We further show that M&M can be used successfully to progress through a curriculum of architectural variants defining an agents internal state. (3) Finally, we illustrate how a variant of our method can be used to improve agent performance in a multitask setting.", "title": "" }, { "docid": "d2305c7218a9e2bb52c7b9828bb8cdb4", "text": "The World Wide Web, and online social networks in particular, have increased connectivity between people such that information can spread to millions of people in a matter of minutes. This form of online collective contagion has provided many benefits to society, such as providing reassurance and emergency management in the immediate aftermath of natural disasters. However, it also poses a potential risk to vulnerable Web users who receive this information and could subsequently come to harm. One example of this would be the spread of suicidal ideation in online social networks, about which concerns have been raised. In this paper we report the results of a number of machine classifiers built with the aim of classifying text relating to suicide on Twitter. The classifier distinguishes between the more worrying content, such as suicidal ideation, and other suicide-related topics such as reporting of a suicide, memorial, campaigning and support. It also aims to identify flippant references to suicide. We built a set of baseline classifiers using lexical, structural, emotive and psychological features extracted from Twitter posts. We then improved on the baseline classifiers by building an ensemble classifier using the Rotation Forest algorithm and a Maximum Probability voting classification decision method, based on the outcome of base classifiers. This achieved an F-measure of 0.728 overall (for 7 classes, including suicidal ideation) and 0.69 for the suicidal ideation class. We summarise the results by reflecting on the most significant predictive principle components of the suicidal ideation class to provide insight into the language used on Twitter to express suicidal ideation.", "title": "" }, { "docid": "fcd349147673758eedb6dba0cd7af850", "text": "We present VideoLSTM for end-to-end sequence learning of actions in video. Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video has a spatial layout. To exploit the spatial correlation we hardwire convolutions in the soft-Attention LSTM architecture. Second, motion not only informs us about the action content, but also guides better the attention towards the relevant spatio-temporal locations. We introduce motion-based attention. 
And finally, we demonstrate how the attention from VideoLSTM can be exploited for action localization by relying on the action class label and temporal attention smoothing. Experiments on UCF101, HMDB51 and THUMOS13 reveal the benefit of the video-specific adaptations of VideoLSTM in isolation as well as when integrated in a combined architecture. It compares favorably against other LSTM architectures for action classification and especially action localization.", "title": "" }, { "docid": "817afe747e4079d11fed37f8fb748de8", "text": "Vehicle re-identification is a process of recognising a vehicle at different locations. It has attracted increasing amounts of attention due to the rapidly-increasing number of vehicles. Identification of two vehicles of the same model is even more difficult than the identification of identical twin humans. Further-more, there is no vehicle re-identification dataset that considers the interference caused by the presence of other vehicles of the same model. Therefore, to provide a fair comparison and facilitate future research into vehicle re-identification, this paper constructs a new dataset called the vehicle re-identification dataset-1 (1 VRID-1). VRID-1 contains 10,000 images captured in daytime of 1,000 individual vehicles of the ten most common vehicle models. For each vehicle model, there are 100 individual vehicles, and for each of these, there are ten images captured at different locations. The images in VRID-1 were captured by 326 surveillance cameras, and thus there are various vehicles poses and levels of illumination. Yet, it provides images of good enough quality for the evaluation of vehicle re-identification in a practical surveillance environment. In addition, according to the characteristics of vehicle morphology, this paper proposes a deep learning-based method to extract multi-dimensional robust features for vehicle re-identification using convolutional neural networks. Experimental results on the VRID-1 dataset demonstrate that it can deal with interference from vehicles of the same model, and is effective and practical for vehicle re-identification.", "title": "" } ]
scidocsrr
08762179e70c8d20abdbcf830d5d5001
We know what you want to buy: a demographic-based system for product recommendation on microblogs
[ { "docid": "14838947ee3b95c24daba5a293067730", "text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.", "title": "" } ]
[ { "docid": "8e06dbf42df12a34952cdd365b7f328b", "text": "Data and theory from prism adaptation are reviewed for the purpose of identifying control methods in applications of the procedure. Prism exposure evokes three kinds of adaptive or compensatory processes: postural adjustments (visual capture and muscle potentiation), strategic control (including recalibration of target position), and spatial realignment of various sensory-motor reference frames. Muscle potentiation, recalibration, and realignment can all produce prism exposure aftereffects and can all contribute to adaptive performance during prism exposure. Control over these adaptive responses can be achieved by manipulating the locus of asymmetric exercise during exposure (muscle potentiation), the similarity between exposure and post-exposure tasks (calibration), and the timing of visual feedback availability during exposure (realignment).", "title": "" }, { "docid": "31da7b5b403ca92dde4d4c590a900aa1", "text": "In this paper, a new approach for moving an inpipe robot inside underground urban gas pipelines is proposed. Since the urban gas supply system is composed of complicated configurations of pipelines, the inpipe inspection requires a robot with outstanding mobility and corresponding control algorithms to apply for. In advance, this paper introduces a new miniature miniature inpipe robot, called MRINSPECT (Multifunctional Robotic crawler for INpipe inSPECTion) IV, which has been developed for the inspection of urban gas pipelines with a nominal 4-inch inside diameter. Its mechanism for steering with differential–drive wheels arranged three-dimensionally makes itself easily adjust to most pipeline configurations and provides excellent mobility in navigation. Also, analysis for pipelines with fittings are given in detail and geometries of the fittings are mathematically described. It is prerequisite to estimate moving pattern of the robot while passing through the fittings and based on the analysis, a method modulating speed of each drive wheel is proposed. Though modulation of speed is very important during proceeding thought the fittings, it is not easy to control the speeds because each wheel of the robot has contact with the walls having different curvatures. A new and simple way of controlling the speed is developed based on the analysis of the geometrical features of the fittings. This algorithm has the advantage to be applicable without using complicated sensor information. To confirm the effectiveness of the proposed method experiments are performed and additional considerations for the design of an inpipe robot are discussed.", "title": "" }, { "docid": "c6783f80d6b4ce46c9e95bdbd140fb7c", "text": "In Traditional environments, there are many advantages of distributed data warehouses. Distributed processing is the efficient way to increase efficiency of data. But the efficiency of query processing is a critical issue in data warehousing system, as decision support applications require minimum response times to answer complex, ad-hoc queries having aggregations, multi-ways joins overvast repositories of data. To achieve this, the fragmentation of data warehouse is the best to reduce the query execution time. The execution time reduces when queries runs over smaller datasets. The system performance is increased by allowing data to be spread across datamarts. So, it is very important to manage an appropriate methodology for data fragmentation and fragment allocation. 
Here the focus is on distributed data warehouses, combining the known predicate construction techniques with a clustering method to fragment data warehouse relations by using the data mining-based horizontal fragmentation methodology for a relational DDW environment. DW decentralization gives better performance when the fragments are allocated to the corresponding site according to their frequency.", "title": "" }, { "docid": "8856fa1c0650970da31fae67cd8dcd86", "text": "In this paper, a new topology for rectangular waveguide bandpass and low-pass filters is presented. A simple, accurate, and robust design technique for these novel meandered waveguide filters is provided. The proposed filters employ a concatenation of ±90° $E$ -plane mitered bends (±90° EMBs) with different heights and lengths, whose dimensions are consecutively and independently calculated. Each ±90° EMB satisfies a local target reflection coefficient along the device so that they can be calculated separately. The novel structures allow drastically reducing the total length of the filters and embedding bends if desired, or even providing routing capabilities. Furthermore, the new meandered topology allows the introduction of transmission zeros above the passband of the low-pass filter, which can be controlled by the free parameters of the ±90° EMBs. A bandpass and a low-pass filter with meandered topology have been designed following the proposed novel technique. Measurements of the manufactured prototypes are also included to validate the novel topology and design technique, achieving excellent agreement with the simulation results.", "title": "" }, { "docid": "2699d99d61fd2afc9c37d598de90624c", "text": "Information security risks have become a significant concern for users of computer information technology. However, users' behavior of acceptance and actual use of available information security solutions has not been commensurate with the level of their information security concerns. Traditional technology acceptance theory (TAM) emphasizes the factors of perceived usefulness and perceived ease of use in acceptance of technology. There has been little research focus and consensus on the role of knowledge in user adoption of information security solutions. This paper proposes a new and adapted model of technology acceptance that focuses on the relationship between users' knowledge of information security and their behavioral intention to use information security solutions. This study employs a survey method that measures users' knowledge of information security and their attitude and intention toward using information security solutions. Statistical analysis of the results indicates a positive correlation between user knowledge of information security and user intention to adopt and use information security solutions.", "title": "" }, { "docid": "03e48fbf57782a713bd218377290044c", "text": "Several researchers have shown that the efficiency of value iteration, a dynamic programming algorithm for Markov decision processes, can be improved by prioritizing the order of Bellman backups to focus computation on states where the value function can be improved the most. In previous work, a priority queue has been used to order backups. Although this incurs overhead for maintaining the priority queue, previous work has argued that the overhead is usually much less than the benefit from prioritization.
However this conclusion is usually based on a comparison to a non-prioritized approach that performs Bellman backups on states in an arbitrary order. In this paper, we show that the overhead for maintaining the priority queue can be greater than the benefit, when it is compared to very simple heuristics for prioritizing backups that do not require a priority queue. Although the order of backups induced by our simple approach is often sub-optimal, we show that its smaller overhead allows it to converge faster than other state-of-the-art priority-based solvers.", "title": "" }, { "docid": "e72872277a33dcf6d5c1f7e31f68a632", "text": "Tilt rotor unmanned aerial vehicle (TRUAV) with ability of hovering and high-speed cruise has attached much attention, but its transition control is still a difficult point because of varying dynamics. This paper proposes a multi-model adaptive control (MMAC) method for a quad-TRUAV, and the stability in the transition procedure could be ensured by considering corresponding dynamics. For safe transition, tilt corridor is considered firstly, and actual flight status should locate within it. Then, the MMAC controller is constructed according to mode probabilities, which are calculated by solving a quadratic programming problem based on a set of input- output plant models. Compared with typical gain scheduling control, this method could ensure transition stability more effectively.", "title": "" }, { "docid": "7f7a67af972d26746ce1ae0c7ec09499", "text": "We describe Microsoft's conversational speech recognition system, in which we combine recent developments in neural-network-based acoustic and language modeling to advance the state of the art on the Switchboard recognition task. Inspired by machine learning ensemble techniques, the system uses a range of convolutional and recurrent neural networks. I-vector modeling and lattice-free MMI training provide significant gains for all acoustic model architectures. Language model rescoring with multiple forward and backward running RNNLMs, and word posterior-based system combination provide a 20% boost. The best single system uses a ResNet architecture acoustic model with RNNLM rescoring, and achieves a word error rate of 6.9% on the NIST 2000 Switchboard task. The combined system has an error rate of 6.2%, representing an improvement over previously reported results on this benchmark task.", "title": "" }, { "docid": "5a25619cebf454191a0325d3f81e09b7", "text": "Word Sense Disambiguation (WSD) is an important and challenging task in the area of Natural Language Processing (NLP) where the task is to find the correct sense of an ambiguous word given its context. There have been very few attempts on WSD in Bengali or in Indian languages. The k-Nearest-Neighbor (k-NN) algorithm is a very well known and popular method for text classification. The k-NN algorithm determines the classification of a new sample from its k nearest neighbors. In this paper, we present how k-NN algorithm can be effectively applied to the task of WSD in Bengali. The k-NN algorithm achieved an accuracy of over 71% in a WSD task in Bengali reported in this paper.", "title": "" }, { "docid": "dc9547eb3de2bb805b9473997377feb9", "text": "A repeated-measures, waiting list control design was used to assess efficacy of a social skills intervention for autistic spectrum children focused on individual and group LEGO play. The intervention combined aspects of behavior therapy, peer modeling and naturalistic communication strategies. 
Close interaction and joint attention to task play an important role in both group and individual therapy activities. The goal of treatment was to improve social competence (SC) which was construed as reflecting three components: (1) motivation to initiate social contact with peers; (2) ability to sustain interaction with peers for a period of time: and (3) overcoming autistic symptoms of aloofness and rigidity. Measures for the first two variables were based on observation of subjects in unstructured situations with peers; and the third variable was assessed using a structured rating scale, the SI subscale of the GARS. Results revealed significant improvement on all three measures at both 12 and 24 weeks with no evidence of gains during the waiting list period. No gender differences were found on outcome, and age of clients was not correlated with outcome. LEGO play appears to be a particularly effective medium for social skills intervention, and other researchers and clinicians are encouraged to attempt replication of this work, as well as to explore use of LEGO in other methodologies, or with different clinical populations.", "title": "" }, { "docid": "9b451aa93627d7b44acc7150a1b7c2d0", "text": "BACKGROUND\nAerobic endurance exercise has been shown to improve higher cognitive functions such as executive control in healthy subjects. We tested the hypothesis that a 30-minute individually customized endurance exercise program has the potential to enhance executive functions in patients with major depressive disorder.\n\n\nMETHOD\nIn a randomized within-subject study design, 24 patients with DSM-IV major depressive disorder and 10 healthy control subjects performed 30 minutes of aerobic endurance exercise at 2 different workload levels of 40% and 60% of their predetermined individual 4-mmol/L lactic acid exercise capacity. They were then tested with 4 standardized computerized neuropsychological paradigms measuring executive control functions: the task switch paradigm, flanker task, Stroop task, and GoNogo task. Performance was measured by reaction time. Data were gathered between fall 2000 and spring 2002.\n\n\nRESULTS\nWhile there were no significant exercise-dependent alterations in reaction time in the control group, for depressive patients we observed a significant decrease in mean reaction time for the congruent Stroop task condition at the 60% energy level (p = .016), for the incongruent Stroop task condition at the 40% energy level (p = .02), and for the GoNogo task at both energy levels (40%, p = .025; 60%, p = .048). The exercise procedures had no significant effect on reaction time in the task switch paradigm or the flanker task.\n\n\nCONCLUSION\nA single 30-minute aerobic endurance exercise program performed by depressed patients has positive effects on executive control processes that appear to be specifically subserved by the anterior cingulate.", "title": "" }, { "docid": "594d65747dd43e1d445775b1c2ea7ebf", "text": "Current two-dimensional face recognition approaches can obtain a good performance only under constrained environments. However, in the real applications, face appearance changes significantly due to different illumination, pose, and expression. Face recognizers based on different representations of the input face images have different sensitivity to these variations. Therefore, a combination of different face classifiers which can integrate the complementary information should lead to improved classification accuracy. 
We use the sum rule and RBF-based integration strategies to combine three commonly used face classifiers based on PCA, ICA and LDA representations. Experiments conducted on a face database containing 206 subjects (2,060 face images) show that the proposed classifier combination approaches outperform individual classifiers.", "title": "" }, { "docid": "571a4de4ac93b26d55252dab86e2a0d3", "text": "Amnestic mild cognitive impairment (MCI) is a degenerative neurological disorder at the early stage of Alzheimer’s disease (AD). This work is a pilot study aimed at developing a simple scalp-EEG-based method for screening and monitoring MCI and AD. Specifically, the use of graphical analysis of inter-channel coherence of resting EEG for the detection of MCI and AD at early stages is explored. Resting EEG records from 48 age-matched subjects (mean age 75.7 years)—15 normal controls (NC), 16 with early-stage MCI, and 17 with early-stage AD—are examined. Network graphs are constructed using pairwise inter-channel coherence measures for delta–theta, alpha, beta, and gamma band frequencies. Network features are computed and used in a support vector machine model to discriminate among the three groups. Leave-one-out cross-validation discrimination accuracies of 93.6% for MCI vs. NC (p < 0.0003), 93.8% for AD vs. NC (p < 0.0003), and 97.0% for MCI vs. AD (p < 0.0003) are achieved. These results suggest the potential for graphical analysis of resting EEG inter-channel coherence as an efficacious method for noninvasive screening for MCI and early AD.", "title": "" }, { "docid": "5a58ab9fe86a4d0693faacfc238fb35c", "text": "Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing complexity of mobile applications, by offloading the computational workloads from local devices to the cloud. Current research supports workload offloading through appropriate application partitioning and remote method execution, but generally ignores the impact of wireless network characteristics on such offloading. Wireless data transmissions incurred by remote method execution consume a large amount of additional energy during transmission intervals when the network interface stays in the high-power state, and deferring these transmissions increases the response delay of mobile applications. In this paper, we adaptively balance the tradeoff between energy efficiency and responsiveness of mobile applications by developing application-aware wireless transmission scheduling algorithms. We take both causality and run-time dynamics of application method executions into account when deferring wireless transmissions, so as to minimize the wireless energy cost and satisfy the application delay constraint with respect to the practical system contexts. Systematic evaluations show that our scheme significantly improves the energy efficiency of workload offloading over realistic smartphone applications.", "title": "" }, { "docid": "e6b69d4bd6c413e4aaaa0c927db0d55c", "text": "In this study, the inhibitory effect of TCE on nitrification process was investigated with an enriched nitrifier culture. TCE was found to be a competitive inhibitor of ammonia oxidation and the inhibition constant (K I ) was determined as 666–802 μg/l. The TCE affinity for the AMO enzyme was significantly higher than ammonium. 
The effect of TCE on ammonium utilization was evaluated with linearized plots of Monod equation (e.g., Lineweaver–Burk, Hanes–Woolf and Eadie–Hofstee plots) and non-linear least square regression (NLSR). No significant differences were found among these data evaluation methods in terms of kinetic parameters obtained.", "title": "" }, { "docid": "91446020934f6892a3a4807f5a7b3829", "text": "Collaborative filtering recommends items to a user based on the interests of other users having similar preferences. However, high dimensional, sparse data result in poor performance in collaborative filtering. This paper introduces an approach called multiple metadata-based collaborative filtering (MMCF), which utilizes meta-level information to alleviate this problem, e.g., metadata such as genre, director, and actor in the case of movie recommendation. MMCF builds a k-partite graph of users, movies and multiple metadata, and extracts implicit relationships among the metadata and between users and the metadata. Then the implicit relationships are propagated further by applying random walk process in order to alleviate the problem of sparseness in the original data set. The experimental results show substantial improvement over previous approaches on the real Netflix movie dataset.", "title": "" }, { "docid": "91b54989dae7d79e593e461e2390e018", "text": "The increasing popularity of social media has a large impact on the evolution of language usage. The evolution includes the transformation of some existing terms to enhance the expression of the writer’s emotion and feeling. Text processing tasks on social media texts have become much more challenging. In this paper, we propose LexToPlus, a Thai lexeme tokenizer with term normalization process. LexToPlus is designed to handle the intentional errors caused by the repeated characters at the end of words. LexToPlus is a dictionary-based parser which detects existing terms in a dictionary. Unknown tokens with repeated characters are merged and removed. We performed statistical analysis and evaluated the performance of the proposed approach by using a Twitter corpus. The experimental results show that the proposed algorithm yields an accuracy of 96.3% on a test data set. The errors are mostly caused by the out-ofvocabulary problem which can be solved by adding newly found terms into the dictionary.", "title": "" }, { "docid": "6932912b1b880014b8eb2d1b796d7a91", "text": "The ability to identify authors of computer programs based on their coding style is a direct threat to the privacy and anonymity of programmers. While recent work found that source code can be attributed to authors with high accuracy, attribution of executable binaries appears to be much more difficult. Many distinguishing features present in source code, e.g. variable names, are removed in the compilation process, and compiler optimization may alter the structure of a program, further obscuring features that are known to be useful in determining authorship. We examine programmer de-anonymization from the standpoint of machine learning, using a novel set of features that include ones obtained by decompiling the executable binary to source code. We adapt a powerful set of techniques from the domain of source code authorship attribution along with stylistic representations embedded in assembly, resulting in successful deanonymization of a large set of programmers. 
We evaluate our approach on data from the Google Code Jam, obtaining attribution accuracy of up to 96% with 100 and 83% with 600 candidate programmers. We present an executable binary authorship attribution approach, for the first time, that is robust to basic obfuscations, a range of compiler optimization settings, and binaries that have been stripped of their symbol tables. We perform programmer de-anonymization using both obfuscated binaries, and real-world code found “in the wild” in single-author GitHub repositories and the recently leaked Nulled.IO hacker forum. We show that programmers who would like to remain anonymous need to take extreme countermeasures to protect their privacy.", "title": "" }, { "docid": "c4f6ccec24ff18ba839a83119b125f04", "text": "The growing rehabilitation and consumer movement toward independent community living for disabled adults has placed new demands on the health care delivery system. ProgTams must be developed for the disabled adult that provide direct training in adaptive community skills, such as banking, budgeting, consumer advocacy, personal health care, and attendant management. An Independent Living Skills Training Program that uses a psychoeducational model is described. To date, 17 multiply handicapped adults, whose average length of institutionalization was I 1.9 years, have participated in the program. Of these 17, 58.8% returned to community living and 23.5% are waiting for openings m accessible housing units.", "title": "" }, { "docid": "8ff11342a85999b5de70f9aa48c2a201", "text": "Rectified linear activation units are important components for state-of-the-art deep convolutional networks. In this paper, we propose a novel S-shaped rectified linear activation unit (SReLU) to learn both convex and non-convex functions, imitating the multiple function forms given by the two fundamental laws, namely the Webner-Fechner law and the Stevens law, in psychophysics and neural sciences. Specifically, SReLU consists of three piecewise linear functions, which are formulated by four learnable parameters. The SReLU is learned jointly with the training of the whole deep network through back propagation. During the training phase, to initialize SReLU in different layers, we propose a “freezing” method to degenerate SReLU into a predefined leaky rectified linear unit in the initial several training epochs and then adaptively learn the good initial values. SReLU can be universally used in the existing deep networks with negligible additional parameters and computation cost. Experiments with two popular CNN architectures, Network in Network and GoogLeNet on scale-various benchmarks including CIFAR10, CIFAR100, MNIST and ImageNet demonstrate that SReLU achieves remarkable improvement compared to other activation functions.", "title": "" } ]
scidocsrr
bc787c976e5f2c801c6b51cb80964195
Breaking the Barriers to True Augmented Reality
[ { "docid": "08d87fbc4a7f83f451707aef6f6b0342", "text": "This paper presents ZeroN, a new tangible interface element that can be levitated and moved freely by computer in a three dimensional space. ZeroN serves as a tangible rep-resentation of a 3D coordinate of the virtual world through which users can see, feel, and control computation. To ac-complish this, we developed a magnetic control system that can levitate and actuate a permanent magnet in a pre-defined 3D volume. This is combined with an optical tracking and display system that projects images on the levitating object. We present applications that explore this new interaction modality. Users are invited to place or move the ZeroN object just as they can place objects on surfaces. For example, users can place the sun above physical objects to cast digital shadows, or place a planet that will start revolving based on simulated physical conditions. We describe the technology and interaction scenarios, discuss initial observations, and outline future development.", "title": "" }, { "docid": "e31f3642a238f0be69e1e7cd1cc95067", "text": "In the past, several systems have been presented that enable users to view occluded points of interest using Augmented Reality X-ray visualizations. It is challenging to design a visualization that provides correct occlusions between occluder and occluded objects while maximizing legibility. We have previously published an Augmented Reality X-ray visualization that renders edges of the occluder region over the occluded region to facilitate correct occlusions while providing foreground context. While this approach is simple and works in a wide range of situations, it provides only minimal context of the occluder object.", "title": "" } ]
[ { "docid": "b60e8a6f417d70499c7a6a251406c280", "text": "Details are presented of a low cost augmented-reality system for the simulation of ultrasound guided needle insertion procedures (tissue biopsy, abscess drainage, nephrostomy etc.) for interventional radiology education and training. The system comprises physical elements; a mannequin, a mock ultrasound probe and a needle, and software elements; generating virtual ultrasound anatomy and allowing data collection. These two elements are linked by a pair of magnetic 3D position sensors. Virtual anatomic images are generated based on anatomic data derived from full body CT scans of live humans. Details of the novel aspects of this system are presented including; image generation, registration and calibration.", "title": "" }, { "docid": "44ecfa6fb5c31abf3a035dea9e709d11", "text": "The issue of the variant vs. invariant in personality often arises in diVerent forms of the “person– situation” debate, which is based on a false dichotomy between the personal and situational determination of behavior. Previously reported data are summarized that demonstrate how behavior can vary as a function of subtle situational changes while individual consistency is maintained. Further discussion considers the personal source of behavioral invariance, the situational source of behavioral variation, the person–situation interaction, the nature of behavior, and the “personality triad” of persons, situations, and behaviors, in which each element is understood and predicted in terms of the other two. An important goal for future research is further development of theories and methods for conceptualizing and measuring the functional aspects of situations and of behaviors. One reason for the persistence of the person situation debate may be that it serves as a proxy for a deeper, implicit debate over values such as equality vs. individuality, determinism vs. free will, and Xexibility vs. consistency. However, these value dichotomies may be as false as the person–situation debate that they implicitly drive.  2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "318daea2ef9b0d7afe2cb08edcfe6025", "text": "Stock market prediction has become an attractive investigation topic due to its important role in economy and beneficial offers. There is an imminent need to uncover the stock market future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict stock market future trends with small error ratio and improve the accuracy of prediction. This prediction model is based on sentiment analysis of financial news and historical stock market prices. This model provides better accuracy results than all previous studies by considering multiple types of news related to market and company with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using naïve Bayes algorithm. This step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices together to predict future stock prices. This improved the prediction accuracy up to 89.80%.", "title": "" }, { "docid": "33324828efcc9ceafb8654d5a83a1dbf", "text": "Software engineering researchers solve problems of several different kinds. 
To do so, they produce several different kinds of results, and they should develop appropriate evidence to validate these results. They often report their research in conference papers. I analyzed the abstracts of research papers submitted to ICSE 2002 in order to identify the types of research reported in the submitted and accepted papers, and I observed the program committee discussions about which papers to accept. This report presents the research paradigms of the papers, common concerns of the program committee, and statistics on success rates. This information should help researchers design better research projects and write papers that present their results to best advantage.", "title": "" }, { "docid": "cc8cab769752154114d4d499b3e6f974", "text": "The quality of a biosensing system relies on the interfacial properties where bioactive species are immobilized. The design of the surface includes both the immobilization of the bioreceptor itself and the overall chemical preparation of the transducer surface. Hence, the sensitivity and specificity of such devices are directly related to the accessibility and activity of the immobilized molecules. The inertness of the surface that limits the nonspecific adsorption sets the background noise of the sensor. The specifications of the biosensor (signal-to-noise ratio) depend largely on the surface chemistry and preparation process of the biointerface. Lastly, a robust interface improves the stability and the reliability of biosensors. This chapter reports in detail the main surface coupling strategies spanning from random immobilization of native biospecies to uniform and oriented immobilization of site-specific modified biomolecules. The immobilization of receptors on various shapes of solid support is then introduced. Detection systems sensitive to surface phenomena require immobilization as very thin layers (two-dimensional biofunctionalization), whereas other detection systems accept thicker layers (threedimensional biofunctionalization) such as porous materials of high specific area that lead to large increase of signal detection. This didactical overview introduces each step of the biofunctionalization with respect to the diversity of biological molecules, their accessibility and resistance to nonspecific adsorption at interfaces.", "title": "" }, { "docid": "941cd6b47980ff8539b7124a48f160e5", "text": "Question Answering for complex questions is often modelled as a graph construction or traversal task, where a solver must build or traverse a graph of facts that answer and explain a given question. This “multi-hop” inference has been shown to be extremely challenging, with few models able to aggregate more than two facts before being overwhelmed by “semantic drift”, or the tendency for long chains of facts to quickly drift off topic. This is a major barrier to current inference models, as even elementary science questions require an average of 4 to 6 facts to answer and explain. In this work we empirically characterize the difficulty of building or traversing a graph of sentences connected by lexical overlap, by evaluating chance sentence aggregation quality through 9,784 manually-annotated judgements across knowledge graphs built from three freetext corpora (including study guides and Simple Wikipedia). 
We demonstrate semantic drift tends to be high and aggregation quality low, at between 0.04% and 3%, and highlight scenarios that maximize the likelihood of meaningfully combining information.", "title": "" }, { "docid": "7e03d09882c7c8fcab5df7a6bd12764f", "text": "This paper describes a background digital calibration technique based on bitwise correlation (BWC) to correct the capacitive digital-to-analog converter (DAC) mismatch error in successive-approximation-register (SAR) analog-to-digital converters (ADC's). Aided by a single-bit pseudorandom noise (PN) injected to the ADC input, the calibration engine extracts all bit weights simultaneously to facilitate a digital-domain correction. The analog overhead associated with this technique is negligible and the conversion speed is fully retained (in contrast to [1] in which the ADC throughput is halved). A prototype 12bit 50-MS/s SAR ADC fabricated in 90-nm CMOS measured a 66.5-dB peak SNDR and an 86.0-dB peak SFDR with calibration, while occupying 0.046 mm2 and dissipating 3.3 mW from a 1.2-V supply. The calibration logic is estimated to occupy 0.072 mm2 with a power consumption of 1.4 mW in the same process.", "title": "" }, { "docid": "3980da6e0c81bf029bbada09d7ea59e3", "text": "We study RF-enabled wireless energy transfer (WET) via energy beamforming, from a multi-antenna energy transmitter (ET) to multiple energy receivers (ERs) in a backscatter communication system such as RFID. The acquisition of the forward-channel (i.e., ET-to-ER) state information (F-CSI) at the ET (or RFID reader) is challenging, since the ERs (or RFID tags) are typically too energy-and-hardware-constrained to estimate or feedback the F-CSI. The ET leverages its observed backscatter signals to estimate the backscatter-channel (i.e., ET-to-ER-to-ET) state information (BS-CSI) directly. We first analyze the harvested energy obtained using the estimated BS-CSI. Furthermore, we optimize the resource allocation to maximize the total utility of harvested energy. For WET to single ER, we obtain the optimal channel-training energy in a semiclosed form. For WET to multiple ERs, we optimize the channel-training energy and the energy allocation weights for different energy beams. For the straightforward weighted-sum-energy (WSE) maximization, the optimal WET scheme is shown to use only one energy beam, which leads to unfairness among ERs and motivates us to consider the complicated proportional-fair-energy (PFE) maximization. For PFE maximization, we show that it is a biconvex problem, and propose a block-coordinate-descent-based algorithm to find the close-to-optimal solution. Numerical results show that with the optimized solutions, the harvested energy suffers slight reduction of less than 10%, compared to that obtained using the perfect F-CSI.", "title": "" }, { "docid": "cbba6c341bd0440874d6a882c944a60a", "text": "Mining software repositories at the source code level can provide a greater understanding of how software evolves. We present a tool for quickly comparing the source code of different versions of a C program. The approach is based on partial abstract syntax tree matching, and can track simple changes to global variables, types and functions. These changes can characterize aspects of software evolution useful for answering higher level questions. In particular, we consider how they could be used to inform the design of a dynamic software updating system. We report results based on measurements of various versions of popular open source programs. 
including BIND, OpenSSH, Apache, Vsftpd and the Linux kernel.", "title": "" }, { "docid": "566c6e3f9267fc8ccfcf337dc7aa7892", "text": "Research into the values motivating unsustainable behavior has generated unique insight into how NGOs and environmental campaigns contribute toward successfully fostering significant and long-term behavior change, yet thus far this research has not been applied to the domain of sustainable HCI. We explore the implications of this research as it relates to the potential limitations of current approaches to persuasive technology, and what it means for designing higher impact interventions. As a means of communicating these implications to be readily understandable and implementable, we develop a set of antipatterns to describe persuasive technology approaches that values research suggests are unlikely to yield significant sustainability wins, and a complementary set of patterns to describe new guidelines for what may become persuasive technology best practice.", "title": "" }, { "docid": "4eaa8c1af7a4f6f6c9de1e6de3f2495f", "text": "Technologies to support the Internet of Things are becoming more important as the need to better understand our environments and make them smart increases. As a result it is predicted that intelligent devices and networks, such as WSNs, will not be isolated, but connected and integrated, composing computer networks. So far, the IP-based Internet is the largest network in the world; therefore, there are great strides to connect WSNs with the Internet. To this end, the IETF has developed a suite of protocols and open standards for accessing applications and services for wireless resource constrained networks. However, many open challenges remain, mostly due to the complex deployment characteristics of such systems and the stringent requirements imposed by various services wishing to make use of such complex systems. Thus, it becomes critically important to study how the current approaches to standardization in this area can be improved, and at the same time better understand the opportunities for the research community to contribute to the IoT field. To this end, this article presents an overview of current standards and research activities in both industry and academia.", "title": "" }, { "docid": "c4c3a9572659543c5cd5d1bb50a13bee", "text": "Optic disc (OD) is a key structure in retinal images. It serves as an indicator to detect various diseases such as glaucoma and changes related to new vessel formation on the OD in diabetic retinopathy (DR) or retinal vein occlusion. OD is also essential to locate structures such as the macula and the main vascular arcade. Most existing methods for OD localization are rule-based, either exploiting the OD appearance properties or the spatial relationship between the OD and the main vascular arcade. The detection of OD abnormalities has been performed through the detection of lesions such as hemorrhaeges or through measuring cup to disc ratio. Thus these methods result in complex and inflexible image analysis algorithms limiting their applicability to large image sets obtained either in epidemiological studies or in screening for retinal or optic nerve diseases. In this paper, we propose an end-to-end supervised model for OD abnormality detection. The most informative features of the OD are learned directly from retinal images and are adapted to the dataset at hand. 
Our experimental results validated the effectiveness of this current approach and showed its potential application.", "title": "" }, { "docid": "be5419a2175c5b21c8b7b1930a5a23f5", "text": "Disambiguation to Wikipedia (D2W) is the task of linking mentions of concepts in text to their corresponding Wikipedia entries. Most previous work has focused on linking terms in formal texts (e.g. newswire) to Wikipedia. Linking terms in short informal texts (e.g. tweets) is difficult for systems and humans alike as they lack a rich disambiguation context. We first evaluate an existing Twitter dataset as well as the D2W task in general. We then test the effects of two tweet context expansion methods, based on tweet authorship and topic-based clustering, on a state-of-the-art D2W system and evaluate the results. TITLE AND ABSTRACT IN BASQUE Testuinguruaren Hedapenaren Analisia eta Hobekuntza Mikroblogak Wikifikatzeko Esanahia Wikipediarekiko Argitzea (D2W) deritzo testuetan aurkitutako kontzeptuen aipamenak Wikipedian dagozkien sarrerei lotzeari. Aurreko lan gehienek testu formalak (newswire, esate baterako) lotu dituzte Wikipediarekin. Testu informalak (tweet-ak, esate baterako) lotzea, ordea, zaila da bai sistementzat eta baita gizakiontzat ere, argipena erraztuko luketen testuingururik ez dutelako. Lehenik eta behin, Twitter-en gainean sortutako datu-sorta bat, eta D2W ataza bera ebaluatzen ditugu. Ondoren, egungo D2W sistema baten gainean testuingurua hedatzeko bi teknika aztertu eta ebaluatzen ditugu. Bi teknika hauek tweet-aren egilean eta gaikako multzokatze metodo batean oinarritzen dira.", "title": "" }, { "docid": "94bb7d2329cbea921c6f879090ec872d", "text": "We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. An interactive version of this paper is available at https://worldmodels.github.io", "title": "" }, { "docid": "ac0e77985a38a3fc024de8a6f504a98c", "text": "High-protein, low-carbohydrate (HPLC) diets are common in cats, but their effect on the gut microbiome has been ignored. The present study was conducted to test the effects of dietary protein:carbohydrate ratio on the gut microbiota of growing kittens. Male domestic shorthair kittens were raised by mothers fed moderate-protein, moderate-carbohydrate (MPMC; n 7) or HPLC (n 7) diets, and then weaned at 8 weeks onto the same diet. Fresh faeces were collected at 8, 12 and 16 weeks; DNA was extracted, followed by amplification of the V4–V6 region of the 16S rRNA gene using 454 pyrosequencing. A total of 384 588 sequences (average of 9374 per sample) were generated. Dual hierarchical clustering indicated distinct clustering based on the protein:carbohydrate ratio regardless of age. The protein:carbohydrate ratio affected faecal bacteria. Faecal Actinobacteria were greater (P< 0·05) and Fusobacteria were lower (P< 0·05) in MPMC-fed kittens. 
Faecal Clostridium, Faecalibacterium, Ruminococcus, Blautia and Eubacterium were greater (P< 0·05) in HPLC-fed kittens, while Dialister, Acidaminococcus, Bifidobacterium, Megasphaera and Mitsuokella were greater (P< 0·05) in MPMC-fed kittens. Principal component analysis of faecal bacteria and blood metabolites and hormones resulted in distinct clusters. Of particular interest was the clustering of blood TAG with faecal Clostridiaceae, Eubacteriaceae, Ruminococcaceae, Fusobacteriaceae and Lachnospiraceae; blood ghrelin with faecal Coriobacteriaceae, Bifidobacteriaceae and Veillonellaceae; and blood glucose, cholesterol and leptin with faecal Lactobacillaceae. The present results demonstrate that the protein:carbohydrate ratio affects the faecal microbiome, and highlight the associations between faecal microbes and circulating hormones and metabolites that may be important in terms of satiety and host metabolism.", "title": "" }, { "docid": "920c1b2b4720586b1eb90b08631d9e6f", "text": "Linear active-power-only power flow approximations are pervasive in the planning and control of power systems. However, AC power systems are governed by a system of nonlinear non-convex power flow equations. Existing linear approximations fail to capture key power flow variables including reactive power and voltage magnitudes, both of which are necessary in many applications that require voltage management and AC power flow feasibility. This paper proposes novel linear-programming models (the LPAC models) that incorporate reactive power and voltage magnitudes in a linear power flow approximation. The LPAC models are built on a polyhedral relaxation of the cosine terms in the AC equations, as well as Taylor approximations of the remaining nonlinear terms. Experimental comparisons with AC solutions on a variety of standard IEEE and Matpower benchmarks show that the LPAC models produce accurate values for active and reactive power, phase angles, and voltage magnitudes. The potential benefits of the LPAC models are illustrated on two “proof-of-concept” studies in power restoration and capacitor placement.", "title": "" }, { "docid": "d10ec03d91d58dd678c995ec1877c710", "text": "Major depressive disorders, long considered to be of neurochemical origin, have recently been associated with impairments in signaling pathways that regulate neuroplasticity and cell survival. Agents designed to directly target molecules in these pathways may hold promise as new therapeutics for depression.", "title": "" }, { "docid": "b85ad4f280359fec469dbb766d3f7bd8", "text": "As we write this chapter, the field of industrial– organizational psychology in the United States has survived its third attempt at a name change. To provide a little perspective, the moniker industrial psychology became popular after World War I, and described a field that was characterized by ability testing and vocational assessment (Koppes, 2003). The current label, industrial– organizational (I-O) psychology, was made official in 1973. The addition of organizational reflected the growing influence of social psychologists and organizational development consultants, as well as the intellectual and social milieu of the period (see Highhouse, 2007). The change to I-O psychology was more of a compromise than a solution—which may have succeeded only to the extent that everyone was equally dissatisfied. The first attempt to change this clunky label, therefore, occurred in 1976. 
Popular alternatives at the time were personnel psychology , business psychology , and psychology of work . The leading contender, however, was organizational psychology because, according to then-future APA Division 14 president Arthur MacKinney, “all of the Division’s work is grounded in organizational contexts” (MacKinney 1976, p. 2). The issue stalled before ever making it", "title": "" }, { "docid": "bfac4c835d49bef4ad961b8e324c4559", "text": "We describe a new annotation scheme for formalizing relation structures in research papers. The scheme has been developed through the investigation of computer science papers. Using the scheme, we are building a Japanese corpus to help develop information extraction systems for digital libraries. We report on the outline of the annotation scheme and on annotation experiments conducted on research abstracts from the IPSJ Journal.", "title": "" }, { "docid": "cd8bd76ecebbd939400b4724499f7592", "text": "Scene recognition with RGB images has been extensively studied and has reached very remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so often leverages RGB large datasets, by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching bottom layers, which is key to learn modality-specific features. In contrast, we focus on the bottom layers, and propose an alternative strategy to learn depth features combining local weakly supervised training from patches followed by global fine tuning with images. This strategy is capable of learning very discriminative depthspecific features with limited depth images, without resorting to Places-CNN. In addition we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them in a common space and further leaning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth only and combined RGB-D data.", "title": "" } ]
scidocsrr
477717b583d7b33aa37bdb9a169c2a01
Mutual Component Analysis for Heterogeneous Face Recognition
[ { "docid": "08e03ec7a26e00c92f799dfb6c07174e", "text": "Heterogeneous face recognition (HFR) involves matching two face images from alternate imaging modalities, such as an infrared image to a photograph or a sketch to a photograph. Accurate HFR systems are of great value in various applications (e.g., forensics and surveillance), where the gallery databases are populated with photographs (e.g., mug shot or passport photographs) but the probe images are often limited to some alternate modality. A generic HFR framework is proposed in which both probe and gallery images are represented in terms of nonlinear similarities to a collection of prototype face images. The prototype subjects (i.e., the training set) have an image in each modality (probe and gallery), and the similarity of an image is measured against the prototype images from the corresponding modality. The accuracy of this nonlinear prototype representation is improved by projecting the features into a linear discriminant subspace. Random sampling is introduced into the HFR framework to better handle challenges arising from the small sample size problem. The merits of the proposed approach, called prototype random subspace (P-RS), are demonstrated on four different heterogeneous scenarios: 1) near infrared (NIR) to photograph, 2) thermal to photograph, 3) viewed sketch to photograph, and 4) forensic sketch to photograph.", "title": "" }, { "docid": "64f2091b23a82fae56751a78d433047c", "text": "Aging variation poses a serious problem to automatic face recognition systems. Most of the face recognition studies that have addressed the aging problem are focused on age estimation or aging simulation. Designing an appropriate feature representation and an effective matching framework for age invariant face recognition remains an open problem. In this paper, we propose a discriminative model to address face matching in the presence of age variation. In this framework, we first represent each face by designing a densely sampled local feature description scheme, in which scale invariant feature transform (SIFT) and multi-scale local binary patterns (MLBP) serve as the local descriptors. By densely sampling the two kinds of local descriptors from the entire facial image, sufficient discriminatory information, including the distribution of the edge direction in the face image (that is expected to be age invariant) can be extracted for further analysis. Since both SIFT-based local features and MLBP-based local features span a high-dimensional feature space, to avoid the overfitting problem, we develop an algorithm, called multi-feature discriminant analysis (MFDA) to process these two local feature spaces in a unified framework. The MFDA is an extension and improvement of the LDA using multiple features combined with two different random sampling methods in feature and sample space. By random sampling the training set as well as the feature space, multiple LDA-based classifiers are constructed and then combined to generate a robust decision via a fusion rule. Experimental results show that our approach outperforms a state-of-the-art commercial face recognition engine on two public domain face aging data sets: MORPH and FG-NET. We also compare the performance of the proposed discriminative model with a generative aging model. 
A fusion of discriminative and generative models further improves the face matching accuracy in the presence of aging.", "title": "" }, { "docid": "804cee969d47d912d8bdc40f3a3eeb32", "text": "The problem of matching a forensic sketch to a gallery of mug shot images is addressed in this paper. Previous research in sketch matching only offered solutions to matching highly accurate sketches that were drawn while looking at the subject (viewed sketches). Forensic sketches differ from viewed sketches in that they are drawn by a police sketch artist using the description of the subject provided by an eyewitness. To identify forensic sketches, we present a framework called local feature-based discriminant analysis (LFDA). In LFDA, we individually represent both sketches and photos using SIFT feature descriptors and multiscale local binary patterns (MLBP). Multiple discriminant projections are then used on partitioned vectors of the feature-based representation for minimum distance matching. We apply this method to match a data set of 159 forensic sketches against a mug shot gallery containing 10,159 images. Compared to a leading commercial face recognition system, LFDA offers substantial improvements in matching forensic sketches to the corresponding face images. We were able to further improve the matching performance using race and gender information to reduce the target gallery size. Additional experiments demonstrate that the proposed framework leads to state-of-the-art accuracies when matching viewed sketches.", "title": "" }, { "docid": "60cb22e89255e33d5f06ee90627731a7", "text": "Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many computer vision-related tasks. We propose the multispectral neural networks (MSNN) to learn features from multicolumn deep neural networks and embed the penultimate hierarchical discriminative manifolds into a compact representation. The low-dimensional embedding explores the complementary property of different views wherein the distribution of each view is sufficiently smooth and hence achieves robustness, given few labeled training data. Our experiments show that spectrally embedding several deep neural networks can explore the optimum output from the multicolumn networks and consistently decrease the error rate compared with a single deep network.", "title": "" } ]
[ { "docid": "c8d5a8d13d3cd9e150537bd8957a4512", "text": "Classroom interactivity has a number of significant benefits: it promotes an active learning environment, provides greater feedback for lecturers, increases student motivation, and enables a learning community (Bishop, Dinkins, & Dominick, 2003; Mazur, 1998; McConnell et al., 2006). On the other hand, interactive activities for large classes (over 100 students) have proven to be quite difficult and, often, inefficient (Freeman & Blayney, 2005).", "title": "" }, { "docid": "fa396377fbec310c9d4b9792cc66f9b9", "text": "Attention-based deep learning model as a human-centered smart technology has become the state-of-the-art method in addressing relation extraction, while implementing natural language processing. How to effectively improve the computational performance of that model has always been a research focus in both academic and industrial communities. Generally, the structures of model would greatly affect the final results of relation extraction. In this article, a deep learning model with a novel structure is proposed. In our model, after incorporating the highway network into a bidirectional gated recurrent unit, the attention mechanism is additionally utilized in an effort to assign weights of key issues in the network structure. Here, the introduction of highway network could enable the proposed model to capture much more semantic information. Experiments on a popular benchmark data set are conducted, and the results demonstrate that the proposed model outperforms some existing relation extraction methods. Furthermore, the performance of our method is also tested in the analysis of geological data, where the relation extraction in Chinese geological field is addressed and a satisfactory display result is achieved.", "title": "" }, { "docid": "9239ff0e4c8849498f4b8eaae6826d8e", "text": "High employee turnover rate in Malaysia’s retail industry has become a major issue that needs to be addressed. This study determines the levels of job satisfaction, organizational commitment, and turnover intention of employees in a retail company in Malaysia. The relationships between job satisfaction and organizational commitment on turnover intention are also investigated. A questionnaire was developed using Job Descriptive Index, Organizational Commitment Questionnaire, and Lee and Mowday’s turnover intention items and data were collected from 62 respondents. The findings suggested that the respondents were moderately satisfied with job satisfaction facets such as promotion, work itself, co-workers, and supervisors but were unsatisfied with salary. They also had moderate commitment level with considerably high intention to leave the organization. All satisfaction facets (except for co-workers) and organizational commitment were significantly and negatively related to turnover intention. Based on the findings, retention strategies of retail employees were proposed. Keywords—Job satisfaction, organizational commitment, retail employees, turnover intention.", "title": "" }, { "docid": "23ae026d482a0d4805cac3bb0762aed0", "text": "Time series motifs are pairs of individual time series, or subsequences of a longer time series, which are very similar to each other. As with their discrete analogues in computational biology, this similarity hints at structure which has been conserved for some reason and may therefore be of interest. 
Since the formalism of time series motifs in 2002, dozens of researchers have used them for diverse applications in many different domains. Because the obvious algorithm for computing motifs is quadratic in the number of items, more than a dozen approximate algorithms to discover motifs have been proposed in the literature. In this work, for the first time, we show a tractable exact algorithm to find time series motifs. As we shall show through extensive experiments, our algorithm is up to three orders of magnitude faster than brute-force search in large datasets. We further show that our algorithm is fast enough to be used as a subroutine in higher level data mining algorithms for anytime classification, near-duplicate detection and summarization, and we consider detailed case studies in domains as diverse as electroencephalograph interpretation and entomological telemetry data mining.", "title": "" }, { "docid": "fe06ac2458e00c5447a255486189f1d1", "text": "The design and control of robots from the perspective of human safety is desired. We propose a mechanical compliance control system as a new pneumatic arm control system. However, safety against collisions with obstacles in an unpredictable environment is difficult to insure in previous system. The main feature of the proposed system is that the two desired pressure values are calculated by using two other desired values, the end compliance of the arm and the end position and posture of the arm.", "title": "" }, { "docid": "2515c04775dc0a1e1d96692da208c257", "text": "We present a computational method for extracting simple descriptions of high dimensional data sets in the form of simplicial complexes. Our method, called Mapper, is based on the idea of partial clustering of the data guided by a set of functions defined on the data. The proposed method is not dependent on any particular clustering algorithm, i.e. any clustering algorithm may be used with Mapper. We implement this method and present a few sample applications in which simple descriptions of the data present important information about its structure.", "title": "" }, { "docid": "c90b05657b7673257db617b62d0ed80c", "text": "Automated tongue image segmentation, in Chinese medicine, is difficult due to two special factors: 1) there are many pathological details on the surface of the tongue, which have a large influence on edge extraction; 2) the shapes of the tongue bodies captured from various persons (with different diseases) are quite different, so they are impossible to describe properly using a predefined deformable template. To address these problems, in this paper, we propose an original technique that is based on a combination of a bi-elliptical deformable template (BEDT) and an active contour model, namely the bi-elliptical deformable contour (BEDC). The BEDT captures gross shape features by using the steepest decent method on its energy function in the parameter space. The BEDC is derived from the BEDT by substituting template forces for classical internal forces, and can deform to fit local details. Our algorithm features fully automatic interpretation of tongue images and a consistent combination of global and local controls via the template force. 
We apply the BEDC to a large set of clinical tongue images and present experimental results.", "title": "" }, { "docid": "800aa2ecdf0a29c7fa7860c6b0618a6b", "text": "This paper presents three topological classes of dc-to-dc converters, totaling nine converters (each class with three buck, boost, and buck-boost voltage transfer function topologies), which offer continuous input and output energy flow, applicable and mandatory for renewable energy source, maximum power point tracking and maximum source energy extraction. A current sourcing output caters for converter module output parallel connection. The first class of three topologies employs both series input and output inductance, while anomalously the other two classes of six related topologies employ only either series input (three topologies) or series output (three topologies) inductance. All nine converter topologies employ the same elements, while additional load shunting capacitance creates a voltage sourcing output. Converter time-domain simulations and experimental results for the converters support and extol the concepts and analysis presented.", "title": "" }, { "docid": "fb67e237688deb31bd684c714a49dca5", "text": "In order to mitigate investments, stock price forecasting has attracted more attention in recent years. Aiming at the discreteness, non-normality, high-noise in high-frequency data, a support vector machine regression (SVR) algorithm is introduced in this paper. However, the characteristics in different periods of the same stock, or the same periods of different stocks are significantly different. So, SVR with fixed parameters is difficult to satisfy with the constantly changing data flow. To tackle this problem, an adaptive SVR was proposed for stock data at three different time scales, including daily data, 30-min data, and 5-min data. Experiments show that the improved SVR with dynamic optimization of learning parameters by particle swarm optimization can get a better result than compared methods including SVR and back-propagation neural network.", "title": "" }, { "docid": "4ba81ce5756f2311dde3fa438f81e527", "text": "To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions – especially those adding a token – to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience.", "title": "" }, { "docid": "5473962c6c270df695b965cbcc567369", "text": "Medical professionals need a reliable prediction methodology to diagnose cancer and distinguish between the different stages in cancer. 
Classification is a data mining function that assigns items in a collection to target groups or classes. C4.5 classification algorithm has been applied to SEER breast cancer dataset to classify patients into either “Carcinoma in situ” (beginning or pre-cancer stage) or “Malignant potential” group. Pre-processing techniques have been applied to prepare the raw dataset and identify the relevant attributes for classification. Random test samples have been selected from the pre-processed data to obtain classification rules. The rule set obtained was tested with the remaining data. The results are presented and discussed. Keywords— Breast Cancer Diagnosis, Classification, Clinical Data, SEER Dataset, C4.5 Algorithm", "title": "" }, { "docid": "c6e6099599be3cd2d1d87c05635f4248", "text": "PURPOSE\nThe Food Cravings Questionnaires are among the most often used measures for assessing the frequency and intensity of food craving experiences. However, there is a lack of studies that have examined specific cut-off scores that may indicate pathologically elevated levels of food cravings.\n\n\nMETHODS\nReceiver-Operating-Characteristic analysis was used to determine sensitivity and specificity of scores on the Food Cravings Questionnaire-Trait-reduced (FCQ-T-r) for discriminating between individuals with (n = 43) and without (n = 389) \"food addiction\" as assessed with the Yale Food Addiction Scale 2.0.\n\n\nRESULTS\nA cut-off score of 50 on the FCQ-T-r discriminated between individuals with and without \"food addiction\" with high sensitivity (85%) and specificity (93%).\n\n\nCONCLUSIONS\nFCQ-T-r scores of 50 and higher may indicate clinically relevant levels of trait food craving.\n\n\nLEVEL OF EVIDENCE\nLevel V, descriptive study.", "title": "" }, { "docid": "8589ec481e78d14fbeb3e6e4205eee50", "text": "This paper presents a novel ensemble classifier generation technique RotBoost, which is constructed by combining Rotation Forest and AdaBoost. The experiments conducted with 36 real-world data sets available from the UCI repository, among which a classification tree is adopted as the base learning algorithm, demonstrate that RotBoost can generate ensemble classifiers with significantly lower prediction error than either Rotation Forest or AdaBoost more often than the reverse. Meanwhile, RotBoost is found to perform much better than Bagging and MultiBoost. Through employing the bias and variance decompositions of error to gain more insight of the considered classification methods, RotBoost is seen to simultaneously reduce the bias and variance terms of a single tree and the decrement achieved by it is much greater than that done by the other ensemble methods, which leads RotBoost to perform best among the considered classification procedures. Furthermore, RotBoost has a potential advantage over AdaBoost of suiting parallel execution. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ee8a54ee9cd0b3c9a57d8c5ae2b237c2", "text": "Relatively little is known about how commodity consumption amongst African-Americans affirms issues of social organization within society. Moreover, the lack of primary documentation on the attitudes of African-American (A-A) commodity consumers contributes to the distorting image of A-A adolescents who actively engage in name-brand sneaker consumption; consequently maintaining the stigma of A-A adolescents being ‘addicted to brands’ (Chin, 2001). 
This qualitative study sought to employ the attitudes of African-Americans from an urban/metropolitan high school in dialogue on the subject of commodity consumption; while addressing the concepts of structure and agency with respect to name-brand sneaker consumption. Additionally, this study integrated three theoretical frameworks that were used to assess the participants’ engagement as consumers of name-brand sneakers. Through a focus group and analysis of surveys, it was discovered that amongst the African-American adolescent population, sneaker consumption imparted a means of attaining a higher socio-economic status, while concurrently providing an outlet for ‘acting’ as agents within the constraints of a constructed social structure. This study develops a practical method of analyzing several issues within commodity consumption, specifically among African-American adolescents. Prior to an empirical application of several theoretical frameworks, the researcher assessed the role of sneaker production as it predates sneaker consumption. Labor-intensive production of name-brand footwear is almost exclusively located in Asia (Vanderbilt, 1998), and has become the formula for efficient, profitable production in name-brand sneaker factories. Moreover, the production of such footwear is controlled by the demand for commodified products in the global economy. Southeast Asian manufacturing facilities owned by popular athletic footwear companies generate between $830 million and $5 billion a year from sneaker consumption (Vanderbilt, 1998). The researcher asks, What are the characteristics that determine the role of African-American consumers within the name-brand sneaker industry? The manner in which athletic name-brand footwear is consumed is a process that is directly associated with the social satisfaction of the consumer (Stabile, 2000). In this study, the researcher investigated the attitudes of adolescents towards name-brand sneaker consumption and production in order to determine how their perceived socioeconomic status affected by their consumption. Miller (2002) suggests that the consumption practices of young African-Americans present a central understanding of the act of consumption itself. While an analysis of consumption is vital in determining how and to whom a product is marketed Chin (2001), whose argument will be discussed further into this study, McNair ScholarS JourNal • VoluMe 8 111 explicates that (commodity) consumption is significant because it provides an understanding of the socially constructed society in which economically disadvantaged children are a part of.", "title": "" }, { "docid": "967aae790b938ccb219ecf68965c5b02", "text": "This paper describes the control algorithms of the high speed mobile robot Kurt3D. Kurt3D drives up to 4 m/s autonomously and reliably in an unknown office environment. We present the reliable hardware, fast control cycle algorithms and a novel set value computation scheme for achieving these velocities. In addition we sketch a real-time capable laser based position tracking method that is well suited for driving with these velocities.", "title": "" }, { "docid": "68f74c4fc9d1afb00ac2ec0221654410", "text": "Most algorithms in 3-D Computer Vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle or fish-eye lens, generate a lot of non-linear distortion which can be critical. 
To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find the transformation on the video image so that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of our distortion model that best transform these edges to segments. Results are presented on real video images, compared with distortion calibration obtained by a full camera calibration method which uses a calibration grid.", "title": "" }, { "docid": "2e42ab12b43022d22b9459cfaea6f436", "text": "Treemaps provide an interesting solution for representing hierarchical data. However, most studies have mainly focused on layout algorithms and paid limited attention to the interaction with treemaps. This makes it difficult to explore large data sets and to get access to details, especially to those related to the leaves of the trees. We propose the notion of zoomable treemaps (ZTMs), an hybridization between treemaps and zoomable user interfaces that facilitates the navigation in large hierarchical data sets. By providing a consistent set of interaction techniques, ZTMs make it possible for users to browse through very large data sets (e.g., 700,000 nodes dispatched amongst 13 levels). These techniques use the structure of the displayed data to guide the interaction and provide a way to improve interactive navigation in treemaps.", "title": "" }, { "docid": "f9090b6e113445a268fc02894f7f846b", "text": "Reducing inventory levels is a major supply chain management challenge in automobile industries. With the development of information technology new cooperative supply chain contracts emerge such as Vendor-Managed Inventory (VMI). This research aims to look at the literature of information management of VMI and the Internet of Things, then analyzes information flow model of VMI system. The paper analyzes information flow management of VMI system in automobile parts inbound logistics based on the environment of Internet of Things.", "title": "" }, { "docid": "5339bd241f053214673ead767476077d", "text": "----------------------------------------------------------------------ABSTRACT----------------------------------------------------------This paper is a general survey of all the security issues existing in the Internet of Things (IoT) along with an analysis of the privacy issues that an end-user may face as a consequence of the spread of IoT. The majority of the survey is focused on the security loopholes arising out of the information exchange technologies used in Internet of Things. No countermeasure to the security drawbacks has been analyzed in the paper.", "title": "" }, { "docid": "cda6f812328d1a883b0c5938695981fe", "text": "This paper investigates the problem of weakly-supervised semantic segmentation, where image-level labels are used as weak supervision. Inspired by the successful use of Convolutional Neural Networks (CNNs) for fully-supervised semantic segmentation, we choose to directly train the CNNs over the oversegmented regions of images for weakly-supervised semantic segmentation. 
Although there are a few studies on CNNs-based weakly-supervised semantic segmentation, they have rarely considered the noise issue, i.e., the initial weak labels (e.g., social tags) may be noisy. To cope with this issue, we thus propose graph-boosted CNNs (GB-CNNs) for weakly-supervised semantic segmentation. In our GB-CNNs, the graph-based model provides the initial supervision for training the CNNs, and then the outcomes of the CNNs are used to retrain the graph-based model. This training procedure is iteratively implemented to boost the results of semantic segmentation. Experimental results demonstrate that the proposed model outperforms the state-of-the-art weakly-supervised methods. More notably, the proposed model is shown to be more robust in the noisy setting for weakly-supervised semantic segmentation.", "title": "" } ]
scidocsrr
2165f8582cce592f1f24abfee43fd049
NIR-VIS heterogeneous face recognition via cross-spectral joint dictionary learning and reconstruction
[ { "docid": "65118dccb8d5d9be4e21c46e7dde315c", "text": "In this paper, we will present a novel framework of utilizing periocular region for age invariant face recognition. To obtain age invariant features, we first perform preprocessing schemes, such as pose correction, illumination and periocular region normalization. And then we apply robust Walsh-Hadamard transform encoded local binary patterns (WLBP) on preprocessed periocular region only. We find the WLBP feature on periocular region maintains consistency of the same individual across ages. Finally, we use unsupervised discriminant projection (UDP) to build subspaces on WLBP featured periocular images and gain 100% rank-1 identification rate and 98% verification rate at 0.1% false accept rate on the entire FG-NET database. Compared to published results, our proposed approach yields the best recognition and identification results.", "title": "" } ]
[ { "docid": "7e1e475f5447894a6c246e7d47586c4b", "text": "Between 1983 and 2003 forty accidental autoerotic deaths (all males, 13-79 years old) have been investigated at the Institute of Legal Medicine in Hamburg. Three cases with a rather unusual scenery are described in detail: (1) a 28-year-old fireworker was found hanging under a bridge in a peculiar bound belt system. The autopsy and the reconstruction revealed signs of asphyxiation, feminine underwear, and several layers of plastic clothing. (2) A 16-year-old pupil dressed with feminine plastic and rubber utensils fixed and strangulated himself with an electric wire. (3) A 28-year-old handicapped man suffered from progressive muscular dystrophy and was nearly unable to move. His bizarre sexual fantasies were exaggerating: he induced a nurse to draw plastic bags over his body, close his mouth with plastic strips, and put him in a rubbish container where he died from suffocation.", "title": "" }, { "docid": "97abbb650710386d1e28533e8134c42c", "text": "Airway pressure limitation is now a largely accepted strategy in adult respiratory distress syndrome (ARDS) patients; however, some debate persists about the exact level of plateau pressure which can be safely used. The objective of the present study was to examine if the echocardiographic evaluation of right ventricular function performed in ARDS may help to answer to this question. For more than 20 years, we have regularly monitored right ventricular function by echocardiography in ARDS patients, during two different periods, a first (1980–1992) where airway pressure was not limited, and a second (1993–2006) where airway pressure was limited. By pooling our data, we can observe the effect of a large range of plateau pressure upon mortality rate and incidence of acute cor pulmonale. In this whole group of 352 ARDS patients, mortality rate and incidence of cor pulmonale were 80 and 56%, respectively, when plateau pressure was > 35 cmH2O; 42 and 32%, respectively, when plateau pressure was between 27 and 35 cmH2O; and 30 and 13%, respectively, when plateau pressure was < 27 cmH2O. Moreover, a clear interaction between plateau pressure and cor pulmonale was evidenced: whereas the odd ratio of dying for an increase in plateau pressure from 18–26 to 27–35 cm H2O in patients without cor pulmonale was 1.05 (p = 0.635), it was 3.32 in patients with cor pulmonale (p < 0.034). We hypothesize that monitoring of right ventricular function by echocardiography at bedside might help to control the safety of plateau pressure used in ARDS.", "title": "" }, { "docid": "7ec457bd4fe999fff11820acf0f73e6c", "text": "A comparative study between the antioxidant properties of peel (flavedo and albedo) and juice of some commercially grown citrus fruit (Rutaceae), grapefruit (Citrus paradisi), lemon (Citrus limon), lime (Citrusxaurantiifolia) and sweet orange (Citrus sinensis) was performed. Different in vitro assays were applied to the volatile and polar fractions of peels and to crude and polar fraction of juices: 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging capacity, reducing power and inhibition of lipid peroxidation using beta-carotene-linoleate model system in liposomes and thiobarbituric acid reactive substances (TBARS) assay in brain homogenates. Reducing sugars and phenolics were the main antioxidant compounds found in all the extracts. 
Peels polar fractions revealed the highest contents in phenolics, flavonoids, ascorbic acid, carotenoids and reducing sugars, which certainly contribute to the highest antioxidant potential found in these fractions. Peels volatile fractions were clearly separated using discriminant analysis, which is in agreement with their lowest antioxidant potential.", "title": "" }, { "docid": "c6cb6b1cb964d0e2eb8ad344ee4a62b3", "text": "Associative classifiers have proven to be very effective in classification problems. Unfortunately, the algorithms used for learning these classifiers are not able to adequately manage big data because of time complexity and memory constraints. To overcome such drawbacks, we propose a distributed association rule-based classification scheme shaped according to the MapReduce programming model. The scheme mines classification association rules (CARs) using a properly enhanced, distributed version of the well-known FP-Growth algorithm. Once CARs have been mined, the proposed scheme performs a distributed rule pruning. The set of survived CARs is used to classify unlabeled patterns. The memory usage and time complexity for each phase of the learning process are discussed, and the scheme is evaluated on seven real-world big datasets on the Hadoop framework, characterizing its scalability and achievable speedup on small computer clusters. The proposed solution for associative classifiers turns to be suitable to practically address ∗Corresponding Author: Tel: +39 05", "title": "" }, { "docid": "48a79db77ad6c565460095fa055260e4", "text": "In this paper, an improved method for eye extraction using deformable templates is proposed. This new method overcomes the shortcomings of traditional deformable template techniques for eye extraction, such as unexpected shrinking of the template and complexity of the updating procedure, while offering higher flexibility and accuracy A new size term and eye corner finder are introduced to prevent over-shrinking and improve speed and accuracy of fitting. The eye features are fitted in a pre-set order to reduce the complexity of the updating procedure and increase the flexibility. We demonstrate the success of this new method by extracting eye features in real images.", "title": "" }, { "docid": "4b557c498499c9bbb900d4983cc28426", "text": "Document clustering has not been well received as an information retrieval tool. Objections to its use fall into two main categories: first, that clustering is too slow for large corpora (with running time often quadratic in the number of documents); and second, that clustering does not appreciably improve retrieval.\nWe argue that these problems arise only when clustering is used in an attempt to improve conventional search techniques. However, looking at clustering as an information access tool in its own right obviates these objections, and provides a powerful new access paradigm. We present a document browsing technique that employs document clustering as its primary operation. We also present fast (linear time) clustering algorithms which support this interactive browsing paradigm.", "title": "" }, { "docid": "e2b3001513059a02cf053cadab6abb85", "text": "Data mining is the process of discovering meaningful new correlation, patterns and trends by sifting through large amounts of data, using pattern recognition technologies as well as statistical and mathematical techniques. 
Cluster analysis is often used as one of the major data analysis technique widely applied for many practical applications in emerging areas of data mining. Two of the most delegated, partition based clustering algorithms namely k-Means and Fuzzy C-Means are analyzed in this research work. These algorithms are implemented by means of practical approach to analyze its performance, based on their computational time. The telecommunication data is the source data for this analysis. The connection oriented broad band data is used to find the performance of the chosen algorithms. The distance (Euclidian distance) between the server locations and their connections are rearranged after processing the data. The computational complexity (execution time) of each algorithm is analyzed and the results are compared with one another. By comparing the result of this practical approach, it was found that the results obtained are more accurate, easy to understand and above all the time taken to process the data was substantially high in Fuzzy C-Means algorithm than the k-Means. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a8c4e25f6e2e6ec45c8f57e07c2a41c0", "text": "We describe the design and control of a wearable robotic device powered by pneumatic artificial muscle actuators for use in ankle-foot rehabilitation. The design is inspired by the biological musculoskeletal system of the human foot and lower leg, mimicking the morphology and the functionality of the biological muscle-tendon-ligament structure. A key feature of the device is its soft structure that provides active assistance without restricting natural degrees of freedom at the ankle joint. Four pneumatic artificial muscles assist dorsiflexion and plantarflexion as well as inversion and eversion. The prototype is also equipped with various embedded sensors for gait pattern analysis. For the subject tested, the prototype is capable of generating an ankle range of motion of 27° (14° dorsiflexion and 13° plantarflexion). The controllability of the system is experimentally demonstrated using a linear time-invariant (LTI) controller. The controller is found using an identified LTI model of the system, resulting from the interaction of the soft orthotic device with a human leg, and model-based classical control design techniques. The suitability of the proposed control strategy is demonstrated with several angle-reference following experiments.", "title": "" }, { "docid": "e15c37fd455c4cdc93ffa10ce2f07828", "text": "While human listeners excel at selectively attending to a conversation in a cocktail party, machine performance is still far inferior by comparison. We show that the cocktail party problem, or the speech separation problem, can be effectively approached via structured prediction. To account for temporal dynamics in speech, we employ conditional random fields (CRFs) to classify speech dominance within each time-frequency unit for a sound mixture. To capture complex, nonlinear relationship between input and output, both state and transition feature functions in CRFs are learned by deep neural networks. The formulation of the problem as classification allows us to directly optimize a measure that is well correlated with human speech intelligibility. 
The proposed system substantially outperforms existing ones in a variety of noises.", "title": "" }, { "docid": "5906d20bea1c95399395d045f84f11c9", "text": "Constructive interference (CI) enables concurrent transmissions to interfere non-destructively, so as to enhance network concurrency. In this paper, we propose deliberate synchronized constructive interference (Disco), which ensures concurrent transmissions of an identical packet to synchronize more precisely than traditional CI. Disco envisions concurrent transmissions to positively interfere at the receiver, and potentially allows orders of magnitude reductions in energy consumption and improvements in link quality. We also theoretically introduce a sufficient condition to construct Disco with IEEE 802.15.4 radio for the first time. Moreover, we propose Triggercast, a distributed middleware service, and show it is feasible to generate Disco on real sensor network platforms like TMote Sky. To synchronize transmissions of multiple senders at the chip level, Triggercast effectively compensates propagation and radio processing delays, and has 95th percentile synchronization errors of at most 250 ns. Triggercast also intelligently decides which co-senders to participate in simultaneous transmissions, and aligns their transmission time to maximize the overall link Packet Reception Ratio (PRR), under the condition of maximal system robustness. Extensive experiments in real testbeds demonstrate that Triggercast significantly improves PRR from 5 to 70 percent with seven concurrent senders. We also demonstrate that Triggercast provides 1.3χ PRR performance gains in average, when it is integrated with existing data forwarding protocols.", "title": "" }, { "docid": "11cce2c0dae058a7d101387f58e00e5a", "text": "It is a commonly held perception amongst biomechanists, sports medicine practitioners, baseball coaches and players, that an individual baseball player's style of throwing or pitching influences their performance and susceptibility to injury. With the results of a series of focus groups with baseball managers and pitching coaches in mind, the available scientific literature was reviewed regarding the contribution of individual aspects of pitching and throwing mechanics to potential for injury and performance. After a discussion of the limitations of kinematic and kinetic analyses, the individual aspects of pitching mechanics are discussed under arbitrary headings: Foot position at stride foot contact; Elbow flexion; Arm rotation; Arm horizontal abduction; Arm abduction; Lead knee position; Pelvic orientation; Deceleration-phase related issues; Curveballs; and Teaching throwing mechanics. In general, popular opinion of baseball coaching staff was found to be largely in concordance with the scientific investigations of biomechanists with several notable exceptions. Some difficulties are identified with the practical implementation of analyzing throwing mechanics in the field by pitching coaches, and with some unquantified aspects of scientific analyses. 
Key points: Biomechanical analyses including kinematic and kinetic analyses allow for estimation of pitching performance and potential for injury. Some difficulties both theoretic and practical exist for the implementation and interpretation of such analyses. Commonly held opinions of baseball pitching authorities are largely held to concur with biomechanical analyses. Recommendations can be made regarding appropriate pitching and throwing technique in light of these investigations.", "title": "" }, { "docid": "b3ced0cf4520f44bc1fd745ae439bcf6", "text": "This paper describes the basic principles of traditional 2D hand drawn animation and their application to 3D computer animation. After describing how these principles evolved, the individual principles are detailed, addressing their meanings in 2D hand drawn animation and their application to 3D computer animation. This should demonstrate the importance of these principles to quality 3D computer animation.", "title": "" }, { "docid": "ddb0a3bc0a9367a592403d0fc0cec0a5", "text": "Fluorescence microscopy is a powerful quantitative tool for exploring regulatory networks in single cells. However, the number of molecular species that can be measured simultaneously is limited by the spectral overlap between fluorophores. Here we demonstrate a simple but general strategy to drastically increase the capacity for multiplex detection of molecules in single cells by using optical super-resolution microscopy (SRM) and combinatorial labeling. As a proof of principle, we labeled mRNAs with unique combinations of fluorophores using fluorescence in situ hybridization (FISH), and resolved the sequences and combinations of fluorophores with SRM. We measured mRNA levels of 32 genes simultaneously in single Saccharomyces cerevisiae cells. These experiments demonstrate that combinatorial labeling and super-resolution imaging of single cells is a natural approach to bring systems biology into single cells.", "title": "" }, { "docid": "19acedd03589d1fd1173dd1565d11baf", "text": "This is the first report on the microbial diversity of xaj-pitha, a rice wine fermentation starter culture through a metagenomics approach involving Illumine-based whole genome shotgun (WGS) sequencing method. Metagenomic DNA was extracted from rice wine starter culture concocted by Ahom community of Assam and analyzed using a MiSeq® System. A total of 2,78,231 contigs, with an average read length of 640.13 bp, were obtained. Data obtained from the use of several taxonomic profiling tools were compared with previously reported microbial diversity studies through the culture-dependent and culture-independent method. The microbial community revealed the existence of amylase producers, such as Rhizopus delemar, Mucor circinelloides, and Aspergillus sp. Ethanol producers viz., Meyerozyma guilliermondii, Wickerhamomyces ciferrii, Saccharomyces cerevisiae, Candida glabrata, Debaryomyces hansenii, Ogataea parapolymorpha, and Dekkera bruxellensis, were found associated with the starter culture along with a diverse range of opportunistic contaminants. The bacterial microflora was dominated by lactic acid bacteria (LAB). The most frequent occurring LAB was Lactobacillus plantarum, Lactobacillus brevis, Leuconostoc lactis, Weissella cibaria, Lactococcus lactis, Weissella para mesenteroides, Leuconostoc pseudomesenteroides, etc. 
Our study provided a comprehensive picture of microbial diversity associated with rice wine fermentation starter and indicated the superiority of metagenomic sequencing over previously used techniques.", "title": "" }, { "docid": "9e0267f10a27509ae735b1ade704e461", "text": "Recent advances in software testing allow automatic derivation of tests that reach almost any desired point in the source code. There is, however, a fundamental problem with the general idea of targeting one distinct test coverage goal at a time: Coverage goals are neither independent of each other, nor is test generation for any particular coverage goal guaranteed to succeed. We present EvoSuite, a search-based approach that optimizes whole test suites towards satisfying a coverage criterion, rather than generating distinct test cases directed towards distinct coverage goals. Evaluated on five open source libraries and an industrial case study, we show that EvoSuite achieves up to 18 times the coverage of a traditional approach targeting single branches, with up to 44% smaller test suites.", "title": "" }, { "docid": "62d21ddba64df488fc82e9558f2afc99", "text": "The spatial analysis of crime and the current focus on hotspots has pushed the area of crime mapping to the fore, especially in regard to high volume offences such as vehicle theft and burglary. Hotspots also have a temporal component, yet police recorded crime databases rarely record the actual time of offence as this is seldom known. Police crime data tends, more often than not, to reflect the routine activities of the victims rather than the offence patterns of the offenders. This paper demonstrates a technique that uses police START and END crime times to generate a crime occurrence probability at any given time that can be mapped or visualized graphically. A study in the eastern suburbs of Sydney, Australia, demonstrates that crime hotspots with a geographical proximity can have distinctly different temporal patterns.", "title": "" }, { "docid": "b4889cbeebb3c7688cd785322986453f", "text": "Power consumption is a critical factor for the deployment of embedded computer vision systems. We explore the use of computational cameras that directly output binary gradient images to reduce the portion of the power consumption allocated to image sensing. We survey the accuracy of binary gradient cameras on a number of computer vision tasks using deep learning. These include object recognition, head pose regression, face detection, and gesture recognition. We show that, for certain applications, accuracy can be on par or even better than what can be achieved on traditional images. We are also the first to recover intensity information from binary spatial gradient images—useful for applications with a human observer in the loop, such as surveillance. Our results, which we validate with a prototype binary gradient camera, point to the potential of gradient-based computer vision systems.", "title": "" }, { "docid": "370767f85718121dc3975f383bf99d8b", "text": "A combinatorial classification and a phylogenetic analysis of the ten 12/8 time, seven-stroke bell rhythm timelines in African and Afro-American music are presented. New methods for rhythm classification are proposed based on measures of rhythmic oddity and off-beatness. These combinatorial classifications reveal several new uniqueness properties of the Bembé bell pattern that may explain its widespread popularity and preference among the other patterns in this class. 
A new distance measure called the swap-distance is introduced to measure the non-similarity of two rhythms that have the same number of strokes. A swap in a sequence of notes and rests of equal duration is the location interchange of a note and a rest that are adjacent in the sequence. The swap distance between two rhythms is defined as the minimum number of swaps required to transform one rhythm to the other. A phylogenetic analysis using Splits Graphs with the swap distance shows that each of the ten bell patterns can be derived from one of two “canonical” patterns with at most four swap operations, or from one with at most five swap operations. Furthermore, the phylogenetic analysis suggests that for these ten bell patterns there are no “ancestral” rhythms not contained in this set.", "title": "" }, { "docid": "3fa63b98358afe9b16f983a4b3019ec4", "text": "In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human-robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human-robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately.", "title": "" } ]
scidocsrr
cf1485f1638b2550568bef32bacf2004
Forensic Triage for Mobile Phones with DEC0DE
[ { "docid": "fc79bfdb7fbbfa42d2e1614964113101", "text": "Probability Theory, 2nd ed. Princeton, N. J.: 960. Van Nostrand, 1 121 T. T. Kadota, “Optimum reception of binary gaussian signals,” Bell Sys. Tech. J., vol. 43, pp. 2767-2810, November 1964. 131 T. T. Kadota. “Ootrmum recention of binarv sure and Gaussian signals,” Bell Sys. ?‘ech: J., vol. 44;~~. 1621-1658, October 1965. 141 U. Grenander, ‘Stochastic processes and statistical inference,” Arkiv fiir Matematik, vol. 17, pp. 195-277, 1950. 151 L. A. Zadeh and J. R. Ragazzini, “Optimum filters for the detection of signals in noise,” Proc. IRE, vol. 40, pp. 1223-1231, O,+nhm 1 a.63 161 J. H. Laning and R. H. Battin, Random Processes in Automatic Control. New York: McGraw-Hill. 1956. nn. 269-358. 171 C.. W. Helstrom, “ Solution of the dete&on integral equation for stationary filtered white noise,” IEEE Trans. on Information Theory, vol. IT-II, pp. 335-339, July 1965. 181 T. Kailath, “The detection of known signals in colored Gaussian noise,” Stanford Electronics Labs., Stanford Univ., Stanford, Calif. Tech. Rept. 7050-4, July 1965. 191 T. T. Kadota, “Optimum reception of nf-ary Gaussian signals in Gaussian noise,” Bell. Sys. Tech. J., vol. 44, pp. 2187-2197, November 1965. [lOI T. T. Kadota, “Term-by-term differentiability of Mercer’s expansion,” Proc. of Am. Math. Sot., vol. 18, pp. 69-72, February 1967.", "title": "" }, { "docid": "e602cb626418ff3dbb38fd171bfd359e", "text": "File carving is an important technique for digital forensics investigation and for simple data recovery. By using a database of headers and footers (essentially, strings of bytes at predictable offsets) for specific file types, file carvers can retrieve files from raw disk images, regardless of the type of filesystem on the disk image. Perhaps more importantly, file carving is possible even if the filesystem metadata has been destroyed. This paper presents some requirements for high performance file carving, derived during design and implementation of Scalpel, a new open source file carving application. Scalpel runs on machines with only modest resources and performs carving operations very rapidly, outperforming most, perhaps all, of the current generation of carving tools. The results of a number of experiments are presented to support this assertion.", "title": "" } ]
[ { "docid": "2f1caa8b2c83d7581343bd29cc6f898d", "text": "Sequencing ribosomal RNA (rRNA) genes is currently the method of choice for phylogenetic reconstruction, nucleic acid based detection and quantification of microbial diversity. The ARB software suite with its corresponding rRNA datasets has been accepted by researchers worldwide as a standard tool for large scale rRNA analysis. However, the rapid increase of publicly available rRNA sequence data has recently hampered the maintenance of comprehensive and curated rRNA knowledge databases. A new system, SILVA (from Latin silva, forest), was implemented to provide a central comprehensive web resource for up to date, quality controlled databases of aligned rRNA sequences from the Bacteria, Archaea and Eukarya domains. All sequences are checked for anomalies, carry a rich set of sequence associated contextual information, have multiple taxonomic classifications, and the latest validly described nomenclature. Furthermore, two precompiled sequence datasets compatible with ARB are offered for download on the SILVA website: (i) the reference (Ref) datasets, comprising only high quality, nearly full length sequences suitable for in-depth phylogenetic analysis and probe design and (ii) the comprehensive Parc datasets with all publicly available rRNA sequences longer than 300 nucleotides suitable for biodiversity analyses. The latest publicly available database release 91 (August 2007) hosts 547 521 sequences split into 461 823 small subunit and 85 689 large subunit rRNAs.", "title": "" }, { "docid": "772d1e7115f6b8570e07b7f9ade527a9", "text": "We consider the control of interacting subsystems whose dynamics and constraints are decoupled, but whose state vectors are coupled non-separably in a single cost function of a finite horizon optimal control problem. For a given cost structure, we generate distributed optimal control problems for each subsystem and establish that a distributed receding horizon control implementation is stabilizing to a neighborhood of the objective state. The implementation requires synchronous updates and the exchange of the most recent optimal control trajectory between coupled subsystems prior to each update. The key requirements for stability are that each subsystem not deviate too far from the previous open-loop state trajectory, and that the receding horizon updates happen sufficiently fast. The venue of multi-vehicle formation stabilization is used to demonstrate the distributed implementation.", "title": "" }, { "docid": "040dad70c098f6ab921569fd41574b6f", "text": "In this paper we generalise the sentence compression task. Rather than simply shorten a sentence by deleting words or constituents, as in previous work, we rewrite it using additional operations such as substitution, reordering, and insertion. We present a new corpus that is suited to our task and a discriminative tree-totree transduction model that can naturally account for structural and lexical mismatches. The model incorporates a novel grammar extraction method, uses a language model for coherent output, and can be easily tuned to a wide range of compression specific loss functions.", "title": "" }, { "docid": "73e27f751c8027bac694f2e876d4d910", "text": "The numerous and diverse applications of the Internet of Things (IoT) have the potential to change all areas of daily life of individuals, businesses, and society as a whole. 
The vision of a pervasive IoT spans a wide range of application domains and addresses the enabling technologies needed to meet the performance requirements of various IoT applications. In order to accomplish this vision, this paper aims to provide an analysis of literature in order to propose a new classification of IoT applications, specify and prioritize performance requirements of such IoT application classes, and give an insight into state-of-the-art technologies used to meet these requirements, all from telco’s perspective. A deep and comprehensive understanding of the scope and classification of IoT applications is an essential precondition for determining their performance requirements with the overall goal of defining the enabling technologies towards fifth generation (5G) networks, while avoiding over-specification and high costs. Given the fact that this paper presents an overview of current research for the given topic, it also targets the research community and other stakeholders interested in this contemporary and attractive field for the purpose of recognizing research gaps and recommending new research directions.", "title": "" }, { "docid": "20adf89d9301cdaf64d8bf684886de92", "text": "A standard planar Kernel Density Estimation (KDE) aims to produce a smooth density surface of spatial point events over a 2-D geographic space. However the planar KDE may not be suited for characterizing certain point events, such as traffic accidents, which usually occur inside a 1-D linear space, the roadway network. This paper presents a novel network KDE approach to estimating the density of such spatial point events. One key feature of the new approach is that the network space is represented with basic linear units of equal network length, termed lixel (linear pixel), and related network topology. The use of lixel not only facilitates the systematic selection of a set of regularly spaced locations along a network for density estimation, but also makes the practical application of the network KDE feasible by significantly improving the computation efficiency. The approach is implemented in the ESRI ArcGIS environment and tested with the year 2005 traffic accident data and a road network in the Bowling Green, Kentucky area. The test results indicate that the new network KDE is more appropriate than standard planar KDE for density estimation of traffic accidents, since the latter covers space beyond the event context (network space) and is likely to overestimate the density values. The study also investigates the impacts on density calculation from two kernel functions, lixel lengths, and search bandwidths. It is found that the kernel function is least important in structuring the density pattern over network space, whereas the lixel length critically impacts the local variation details of the spatial density pattern. The search bandwidth imposes the highest influence by controlling the smoothness of the spatial pattern, showing local effects at a narrow bandwidth and revealing \" hot spots \" at larger or global scales with a wider bandwidth. More significantly, the idea of representing a linear network by a network system of equal-length lixels may potentially 3 lead the way to developing a suite of other network related spatial analysis and modeling methods.", "title": "" }, { "docid": "aa7b187adf8478465e580e43730e9d40", "text": "Vehicle detection in aerial images, being an interesting but challenging problem, plays an important role for a wide range of applications. 
Traditional methods are based on sliding-window search and handcrafted or shallow-learning-based features with heavy computational costs and limited representation power. Recently, deep learning algorithms, especially region-based convolutional neural networks (R-CNNs), have achieved state-of-the-art detection performance in computer vision. However, several challenges limit the applications of R-CNNs in vehicle detection from aerial images: 1) vehicles in large-scale aerial images are relatively small in size, and R-CNNs have poor localization performance with small objects; 2) R-CNNs are particularly designed for detecting the bounding box of the targets without extracting attributes; 3) manual annotation is generally expensive and the available manual annotation of vehicles for training R-CNNs are not sufficient in number. To address these problems, this paper proposes a fast and accurate vehicle detection framework. On one hand, to accurately extract vehicle-like targets, we developed an accurate-vehicle-proposal-network (AVPN) based on hyper feature map which combines hierarchical feature maps that are more accurate for small object detection. On the other hand, we propose a coupled R-CNN method, which combines an AVPN and a vehicle attribute learning network to extract the vehicle's location and attributes simultaneously. For original large-scale aerial images with limited manual annotations, we use cropped image blocks for training with data augmentation to avoid overfitting. Comprehensive evaluations on the public Munich vehicle dataset and the collected vehicle dataset demonstrate the accuracy and effectiveness of the proposed method.", "title": "" }, { "docid": "8410b16756a9049c2f57f94658b4d5e3", "text": "Empirical studies, often in the form of controlled experiments, have been widely adopted in software engineering research as a way to evaluate the merits of new software engineering tools. However, controlled experiments involving human participants actually using new tools are still rare, and when they are conducted, some have serious validity concerns. Recent research has also shown that many software engineering researchers view this form of tool evaluation as too risky and too difficult to conduct, as they might ultimately lead to inconclusive or negative results. In this paper, we aim both to help researchers minimize the risks of this form of tool evaluation, and to increase their quality, by offering practical methodological guidance on designing and running controlled experiments with developers. Our guidance fills gaps in the empirical literature by explaining, from a practical perspective, options in the recruitment and selection of human participants, informed consent, experimental procedures, demographic measurements, group assignment, training, the selecting and design of tasks, the measurement of common outcome variables such as success and time on task, and study debriefing. Throughout, we situate this guidance in the results of a new systematic review of the tool evaluations that were published in over 1,700 software engineering papers published from 2001 to 2011.", "title": "" }, { "docid": "33ce6e07bc4031f1b915e32769d5c984", "text": "MOTIVATION\nDIYABC is a software package for a comprehensive analysis of population history using approximate Bayesian computation on DNA polymorphism data. Version 2.0 implements a number of new features and analytical methods. 
It allows (i) the analysis of single nucleotide polymorphism data at large number of loci, apart from microsatellite and DNA sequence data, (ii) efficient Bayesian model choice using linear discriminant analysis on summary statistics and (iii) the serial launching of multiple post-processing analyses. DIYABC v2.0 also includes a user-friendly graphical interface with various new options. It can be run on three operating systems: GNU/Linux, Microsoft Windows and Apple Os X.\n\n\nAVAILABILITY\nFreely available with a detailed notice document and example projects to academic users at http://www1.montpellier.inra.fr/CBGP/diyabc CONTACT: estoup@supagro.inra.fr Supplementary information: Supplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "4596da5305ce150406a8f46a621620a6", "text": "Incorporating prior knowledge like lexical constraints into the model’s output to generate meaningful and coherent sentences has many applications in dialogue system, machine translation, image captioning, etc. However, existing RNN-based models incrementally generate sentences from left to right via beam search, which makes it difficult to directly introduce lexical constraints into the generated sentences. In this paper, we propose a new algorithmic framework, dubbed BFGAN, to address this challenge. Specifically, we employ a backward generator and a forward generator to generate lexically constrained sentences together, and use a discriminator to guide the joint training of two generators by assigning them reward signals. Due to the difficulty of BFGAN training, we propose several training techniques to make the training process more stable and efficient. Our extensive experiments on two large-scale datasets with human evaluation demonstrate that BFGAN has significant improvements over previous methods.", "title": "" }, { "docid": "c3365370cdbf4afe955667f575d1fbb6", "text": "One of the overriding interests of the literature on health care economics is to discover where personal choice in market economies end and corrective government intervention should begin. Our study addresses this question in the context of John Stuart Mill's utilitarian principle of harm. Our primary objective is to determine whether public policy interventions concerning more than 35,000 online pharmacies worldwide are necessary and efficient compared to traditional market-oriented approaches. Secondly, we seek to determine whether government interference could enhance personal  utility maximization, despite its direct and indirect (unintended) costs on medical e-commerce. This study finds that containing the negative externalities of medical e-commerce provides the most compelling raison d'etre of government interference. It asserts that autonomy and paternalism need not be mutually exclusive, despite their direct and indirect consequences on individual choice and decision-making processes. Valuable insights derived from Mill's principle should enrich theory-building in health care economics and policy.", "title": "" }, { "docid": "3feb565be1dc3439fd2fdf6b0e25d65b", "text": "Previous research demonstrated that a single amnesic patient could acquire complex knowledge and processes required for the performance of a computer data-entry task. The present study extends the earlier work to a larger group of brain-damaged patients with memory disorders of varying severity and of various etiologies and with other accompanying cognitive deficits. 
All patients were able to learn both the data-entry procedures and the factual information associated with the task. Declarative knowledge was acquired by patients at a much slower rate than normal whereas procedural learning proceeded at approximately the same rate in patients and control subjects. Patients also showed evidence of transfer of declarative knowledge to the procedural task, as well as transfer of the data-entry procedures across changes in materials.", "title": "" }, { "docid": "cb8b8e5a5e3aeb529ac836872554ca62", "text": "In this paper, we propose to use Convolutional Restricted Boltzmann Machine (ConvRBM) to learn filterbank from the raw audio signals. ConvRBM is a generative model trained in an unsupervised way to model the audio signals of arbitrary lengths. ConvRBM is trained using annealed dropout technique and parameters are optimized using Adam optimization. The subband filters of ConvRBM learned from the ESC-50 database resemble Fourier basis in the mid-frequency range while some of the low-frequency subband filters resemble Gammatone basis. The auditory-like filterbank scale is nonlinear w.r.t. the center frequencies of the subband filters and follows the standard auditory scales. We have used our proposed model as a front-end for the Environmental Sound Classification (ESC) task with supervised Convolutional Neural Network (CNN) as a back-end. Using CNN classifier, the ConvRBM filterbank (ConvRBMBANK) and its score-level fusion with the Mel filterbank energies (FBEs) gave an absolute improvement of 10.65 %, and 18.70 % in the classification accuracy, respectively, over FBEs alone on the ESC-50 database. This shows that the proposed ConvRBM filterbank also contains highly complementary information over the Mel filterbank, which is helpful in the ESC task.", "title": "" }, { "docid": "db0e61e6988106203f6780023ba6902b", "text": "In first stage of each microwave receiver there is Low Noise Amplifier (LNA) circuit, and this stage has important rule in quality factor of the receiver. The design of a LNA in Radio Frequency (RF) circuit requires the trade-off many importance characteristics such as gain, Noise Figure (NF), stability, power consumption and complexity. This situation Forces desingners to make choices in the desing of RF circuits. In this paper the aim is to design and simulate a single stage LNA circuit with high gain and low noise using MESFET for frequency range of 5 GHz to 6 GHz. The desing simulation process is down using Advance Design System (ADS). A single stage LNA has successfully designed with 15.83 dB forward gain and 1.26 dB noise figure in frequency of 5.3 GHz. Also the designed LNA should be working stably In a frequency range of 5 GHz to 6 GHz. Keywords—Advance Design System, Low Noise Amplifier, Radio Frequency, Noise Figure.", "title": "" }, { "docid": "30a14fac7ac6c2515367b11952b0a6d7", "text": "This paper proposes an inductive coupled wireless power transfer (WPT) system with class-E2 dc-dc converter along with its design procedure. The proposed WPT system can achieve high power-conversion efficiency at high frequencies because it satisfies the class-E zero-voltage switching and zero-derivative-voltage switching conditions on both the inverter and the rectifier. By using the class-E inverter as a transmitter and the class-E rectifier as a receiver, high power-delivery efficiency can be achieved in the designed WPT system. 
By using a numerical design procedure proposed in the previous work, it is possible to design the WPT system without considering the impedance matching for satisfying the class-E ZVS/ZDS conditions. The experimental results of the design example showed the overall efficiency of 85.1 % at 100 W output power and 200 kHz operating frequency.", "title": "" }, { "docid": "c7604aae88fdfed9db385e49da5adc9c", "text": "This paper presents our recent work that aims at associating the recognition results of textual and graphical information contained in the scientific chart images. Text components are first located in the input image and then recognized using OCR. On the other hand, the graphical objects are segmented and form high level symbols. Both logical and semantic correspondence between text and graphical symbols are identified. The association of text and graphics allows us to capture the semantic meaning carried by scientific chart images in a more complete way. The result of scientific chart image understanding is presented using XML documents.", "title": "" }, { "docid": "b987b231b1f8e3013c956dc5f0c33fdb", "text": "Context As autonomous driving technology matures towards series production, it is necessary to take a deeper look at various aspects of electrical/electronic (E/E) architectures for autonomous driving. Objective This paper describes a functional reference architecture for autonomous driving, along with various considerations that influence such an architecture. The functionality is described at the logical level, without dependence on specific implementation technologies. Method Engineering design has been used as the research method, which focuses on creating solutions intended for practical application. The architecture has been refined and applied over a five year period to the construction of prototype autonomous vehicles in three different categories, with both academic and industrial stakeholders. Results The architectural components are divided into categories pertaining to (i) perception, (ii) decision and control, and (iii) vehicle platform manipulation. The architecture itself is divided into two layers comprising the vehicle platform and a cognitive driving intelligence. The distribution of components among the architectural layers considers two extremes: one where the vehicle platform is as \"dumb\" as possible, and the other, where the vehicle platform can be treated as an autonomous system with limited intelligence. We recommend a clean split between the driving intelligence and the vehicle platform. The architecture description includes identification of stakeholder concerns, which are grouped under the business and engineering cate-", "title": "" }, { "docid": "6d3e19c44f7af5023ef991b722b078c5", "text": "Volatile substances are commonly misused with easy-to-obtain commercial products, such as glue, shoe polish, nail polish remover, butane lighter fluid, gasoline and computer duster spray. This report describes a case of sudden death of a 29-year-old woman after presumably inhaling gas cartridge butane from a plastic bag. Autopsy, pathological and toxicological analyses were performed in order to determine the cause of death. Pulmonary edema was observed pathologically, and the toxicological study revealed 2.1μL/mL of butane from the blood. The causes of death from inhalation of volatile substances have been explained by four mechanisms; cardiac arrhythmia, anoxia, respiratory depression, and vagal inhibition. 
In this case, the cause of death was determined to be asphyxia from anoxia. Additionally, we have gathered fatal butane inhalation cases with quantitative analyses of butane concentrations, and reviewed other reports describing volatile substance abuse worldwide.", "title": "" }, { "docid": "4f5195fedde1c94cfba4c33c633268e1", "text": "This paper investigates the evaluation of dense 3D face reconstruction from a single 2D image in the wild. To this end, we organise a competition that provides a new benchmark dataset that contains 2000 2D facial images of 135 subjects as well as their 3D ground truth face scans. In contrast to previous competitions or challenges, the aim of this new benchmark dataset is to evaluate the accuracy of a 3D dense face reconstruction algorithm using real, accurate and high-resolution 3D ground truth face scans. In addition to the dataset, we provide a standard protocol as well as a Python script for the evaluation. Last, we report the results obtained by three state-of-the-art 3D face reconstruction systems on the new benchmark dataset. The competition is organised along with the 2018 13th IEEE Conference on Automatic Face & Gesture Recognition.", "title": "" }, { "docid": "6b203b7a8958103b30701ac139eb1fb8", "text": "Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems of these fields. We examine applications of deep learning to a variety of biomedical problems-patient classification, fundamental biological processes and treatment of patients-and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.", "title": "" }, { "docid": "058ca337a484d557869e08c2b47d79e9", "text": "The role of inflammation in carcinogenesis has been extensively investigated and well documented. Many biochemical processes that are altered during chronic inflammation have been implicated in tumorigenesis. 
These include shifting cellular redox balance toward oxidative stress; induction of genomic instability; increased DNA damage; stimulation of cell proliferation, metastasis, and angiogenesis; deregulation of cellular epigenetic control of gene expression; and inappropriate epithelial-to-mesenchymal transition. A wide array of proinflammatory cytokines, prostaglandins, nitric oxide, and matricellular proteins are closely involved in premalignant and malignant conversion of cells in a background of chronic inflammation. Inappropriate transcription of genes encoding inflammatory mediators, survival factors, and angiogenic and metastatic proteins is the key molecular event in linking inflammation and cancer. Aberrant cell signaling pathways comprising various kinases and their downstream transcription factors have been identified as the major contributors in abnormal gene expression associated with inflammation-driven carcinogenesis. The posttranscriptional regulation of gene expression by microRNAs also provides the molecular basis for linking inflammation to cancer. This review highlights the multifaceted role of inflammation in carcinogenesis in the context of altered cellular redox signaling.", "title": "" } ]
scidocsrr
626a603d651a67ba90ba1e75c92730b1
Variational Attention for Sequence-to-Sequence Models
[ { "docid": "5288f4bbc2c9b8531042ce25b8df05b0", "text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.", "title": "" }, { "docid": "355d040cf7dd706f08ef4ce33d53a333", "text": "Conversational participants tend to immediately and unconsciously adapt to each other’s language styles: a speaker will even adjust the number of articles and other function words in their next utterance in response to the number in their partner’s immediately preceding utterance. This striking level of coordination is thought to have arisen as a way to achieve social goals, such as gaining approval or emphasizing difference in status. But has the adaptation mechanism become so deeply embedded in the language-generation process as to become a reflex? We argue that fictional dialogs offer a way to study this question, since authors create the conversations but don’t receive the social benefits (rather, the imagined characters do). Indeed, we find significant coordination across many families of function words in our large movie-script corpus. We also report suggestive preliminary findings on the effects of gender and other features; e.g., surprisingly, for articles, on average, characters adapt more to females than to males.", "title": "" } ]
[ { "docid": "3ec6ee23de04b31e02894d3d0361716f", "text": "The thermometer code-to-binary code encoder has become the bottleneck of the ultra-high speed flash ADCs. In this paper, the authors presented the fat tree thermometer codeto-binary code encoder that is highly suitable for the ultrahigh speed flash ADCs. The simulation and the implementation results show that the fat tree encoder outperforms the commonly used ROM encoder in terms of speed and power for the 6 bit CMOS flash ADC case. The speed is improved by almost a factor of 2 when using the fat tree encoder, which in fact demonstrates the fat tree encoder is an effective solution for the bottleneck problem in ultra-high speed ADCs.", "title": "" }, { "docid": "8edc5388549c89bb9cd7440f3e53f1a3", "text": "Linear models for control and motion generation of humanoid robots have received significant attention in the past years, not only due to their well known theoretical guarantees, but also because of practical computational advantages. However, to tackle more challenging tasks and scenarios such as locomotion on uneven terrain, a more expressive model is required. In this paper, we are interested in contact interaction-centered motion optimization based on the momentum dynamics model. This model is non-linear and non-convex; however, we find a relaxation of the problem that allows us to formulate it as a single convex quadratically-constrained quadratic program (QCQP) that can be very efficiently optimized and is useful for multi-contact planning. This convex model is then coupled to the optimization of end-effector contact locations using a mixed integer program, which can also be efficiently solved. This becomes relevant e.g. to recover from external pushes, where a predefined stepping plan is likely to fail and an online adaptation of the contact location is needed. The performance of our algorithm is demonstrated in several multi-contact scenarios for a humanoid robot.", "title": "" }, { "docid": "57c81eb0f559ea1c10747b5ecae14c67", "text": "OBJECTIVE\nAutism spectrum disorder (ASD) is associated with amplified emotional responses and poor emotional control, but little is known about the underlying mechanisms. This article provides a conceptual and methodologic framework for understanding compromised emotion regulation (ER) in ASD.\n\n\nMETHOD\nAfter defining ER and related constructs, methods to study ER were reviewed with special consideration on how to apply these approaches to ASD. Against the backdrop of cognitive characteristics in ASD and existing ER theories, available research was examined to identify likely contributors to emotional dysregulation in ASD.\n\n\nRESULTS\nLittle is currently known about ER in youth with ASD. Some mechanisms that contribute to poor ER in ASD may be shared with other clinical populations (e.g., physiologic arousal, degree of negative and positive affect, alterations in the amygdala and prefrontal cortex), whereas other mechanisms may be more unique to ASD (e.g., differences in information processing/perception, cognitive factors [e.g., rigidity], less goal-directed behavior and more disorganized emotion in ASD).\n\n\nCONCLUSIONS\nAlthough assignment of concomitant psychiatric diagnoses is warranted in some cases, poor ER may be inherent in ASD and may provide a more parsimonious conceptualization for the many associated socioemotional and behavioral problems in this population. 
Further study of ER in youth with ASD may identify meaningful subgroups of patients and lead to more effective individualized treatments.", "title": "" }, { "docid": "93a49a164437d3cc266d8e859f2bb265", "text": "...................................................................................................................................................4", "title": "" }, { "docid": "9aa91978651f42157b42a55b936a9bc0", "text": "Suicide, the eighth leading cause of death in the United States, accounts for more than 30 000 deaths per year. The total number of suicides has changed little over time. For example, 27 596 U.S. suicides occurred in 1981, and 30 575 occurred in 1998. Between 1981 and 1998, the age-adjusted suicide rate decreased by 9.3%from 11.49 suicides per 100 000 persons to 10.42 suicides per 100 000 persons (www.cdc.gov/ncipc/wisqars). The suicide rate in men (18.7 suicides per 100 000 men in 1998) is more than four times that in women (4.4 suicides per 100 000 women in 1998). In females, suicide rates remain relatively constant beginning in the midteens. In males, suicide rates are stable from the late teenage years until the late 70s, when the rate increases substantially (to 41 suicides per 100 000 persons annually in men 75 to 84 years of age). White men have a twofold higher risk for suicide compared with African-American men (20.2 vs. 10.9 suicides, respectively, each year per 100 000 men). The risk in white women is double that of women in U.S. nonwhite ethnic/racial minority groups (4.9 vs. 2.4 per 100 000 women each year). In countries other than the United States, the most recently reported rates of suicide vary widely, ranging from less than 1 in 100 000 persons per year in Syria, Egypt, and Lebanon to more than 40 in 100 000 persons per year in many former Soviet republics (www.who.int/whosis). Over the past century, Hungary had the world's highest reported rate of suicide; the reason is unknown. Of note, the reported rates of suicide in first-generation immigrants to Australia tend to be more similar to rates in their native country than to rates in their country of current residence (1, 2); these figures indicate the influence of culture and ethnicity on suicide rates. Suicide is the third leading cause of death in persons 15 to 34 years of age. The U.S. suicide rate in all youths decreased by 18% from 1990 to 1998 (www.cdc.gov/ncipc/wisqars) despite a 3.6-fold increase from 1992 to 1995 in white men, a 4.7-fold increase in African-American men, and a 2.1-fold increase in African-American women. Worldwide, from 1950 to 1995 in persons of all ages, suicide rates increased by approximately 35% in men and only approximately 10% in women (www.who.int/whosis). The reasons for the differences in rates among age, sex, and ethnic groups and the change in rates since the 1950s are unknown. Suicide is generally a complication of a psychiatric disorder. More than 90% of suicide victims have a diagnosable psychiatric illness (3-7), and most persons who attempt suicide have a psychiatric disorder. The most common psychiatric conditions associated with suicide or serious suicide attempt are mood disorders (3-8). Investigators have proposed many models to explain or predict suicide (9). One such explanatory and predictive model is the stress-diathesis model (10). One stressor is almost invariably the onset or acute worsening of a psychiatric disorder, but other types of stressors, such as a psychosocial crisis, can also contribute. 
The diathesis for suicidal behavior includes a combination of factors, such as sex, religion, familial and genetic components, childhood experiences, psychosocial support system, availability of highly lethal suicide methods, and various other factors, including cholesterol level. In this review, I describe the neurobiological correlates of the stressors and the diathesis. Literature for this review came from searches of the MEDLINE database (1996 to the present) and from literature cited in review articles. The factors that determined inclusion in this review were superiority of research design (use of psychiatric controls, quality of psychometrics, diagnostic information on the study sample, and definition of suicidal behavior; prospective studies were favored), adequate representation of major points of view, and pivotal reviews of key subjects. What is Suicidal Behavior? Suicidal behavior refers to the most clear-cut and unambiguous act of completed suicide but also includes a heterogeneous spectrum of suicide attempts that range from highly lethal attempts (in which survival is the result of good fortune) to low-lethality attempts that occur in the context of a social crisis and contain a strong element of an appeal for help (11). Suicidal ideation without action is more common than suicidal behavior (11). In most countries, men have a higher reported rate of completed suicide, whereas women have a higher rate of attempted suicide (12). Men tend to use means that are more lethal, plan the suicide attempt more carefully, and avoid detection. In contrast, women tend to use less lethal means of suicide, which carry a higher chance of survival, and they more commonly express an appeal for help by conducting the attempt in a manner that favors discovery and rescue (13, 14). Thus, suicidal behavior has two dimensions (13). The first dimension is the degree of medical lethality or damage resulting from the suicide attempt. The second dimension relates to suicidal intent and measures the degree of preparation, the desire to die versus the desire to live, and the chances of discovery. Intent and lethality are correlated with each other and with biological abnormalities associated with suicide risk (13, 15, 16). The clinical profiles of suicide attempts and completions overlap (17). Suicide attempters who survive very lethal attempts, which are known as failed suicides, have the same clinical and psychosocial profile as suicide completers (11, 17). The study and prevention of failed suicides are probably most relevant to completed suicides. Somewhat related to suicide attempters are patients with serious medical illnesses who do not adhere to treatment regimensfor example, diabetic patients who do not take prescribed medications to control blood sugar levels or persons who engage in high-risk behaviors, such as sky diving or mountaineering. These groups warrant further study to determine whether they have psychopathology that overlaps with the psychopathology of suicide attempters. Intent and lethality are also related to the risk for future completed suicide (13). Subsequent suicide attempts may involve a greater degree of intent and lethality (18), and a previous suicide attempt is an important predictor of future suicide (19, 20) or suicide attempt (21). Careful inquiry about past suicide attempts is an essential part of risk assessment in psychiatric patients. 
Because more than two thirds of suicides occur with the first attempt, history of a suicide attempt is insufficient to predict most suicides; additional risk factors must be considered. Clinical Correlates of Suicidal Behavior Psychological autopsy studies involve review of all available medical records and interviews with family members and friends of the deceased. This method generates valid psychiatric diagnoses (22), and most studies have found that more than 90% of suicide victims had a psychiatric disorder at the time of suicide (3-6, 23). That percentage may be underestimated because accurate data depend on finding informants who knew the victim's state of mind in the weeks before death. Approximately 60% of all suicides occur in persons with a mood disorder (3, 4, 6, 7), and the rest occur in persons with various other psychiatric conditions, including schizophrenia; alcoholism (24); substance abuse (5, 25, 26); and personality disorders (27), such as borderline or antisocial personality disorder (23, 28-30). Lifetime mortality from suicide in discharged hospital populations is approximately 20% in persons with bipolar disorder (manic depression), 15% in persons with unipolar depression, 10% in persons with schizophrenia, 18% in persons with alcoholism, and 5% to 10% in persons with both borderline and antisocial personality disorders (29-33). These personality disorders are characterized by emotional liability, aggression, and impulsivity. The lifetime mortality due to suicide is lower in general psychiatric populations (34, 35). Although suicide is generally a complication of a psychiatric disorder, most persons with a psychiatric disorder never attempt suicide. Even the higher-risk groups, such as persons with unipolar or bipolar mood disorders, have a lifetime suicide attempt rate less than 50%. Thus, persons with these psychiatric disorders who die by suicide differ from those who never attempt suicide. To understand those differences, investigators have compared persons who have attempted suicide and those who have not by matching psychiatric diagnosis and comparable objective severity of illness (10). Suicide attempters differ in two important ways from nonattempters with the same psychiatric disorder. First, they experience more subjective depression and hopelessness and, in particular, have more severe suicidal ideation. They also perceive fewer reasons for living despite having the same objective severity of psychiatric illness and a similar number of adverse life events. One possible explanation for the greater sense of hopelessness and greater number of suicidal ideations is a predisposition for such feelings in the face of illness or other life stressor. The pressure of greater lifetime aggressivity and impulsivity suggests a second diathesis element in suicidal patients. These individuals not only are more aggressive toward others and their environment but are more impulsive in other ways that involve, for example, relationships or personal decisions about a job or purchases. A propensity for more severe suicidal ideation and a greater likelihood of acting on powerful feelings combine to place some patients at greater risk for suicide attempts than others. For clinicians, important indicators of such a diathesis are a history of a suicide attempt, which indicates the presence of a diathesis for suicidal behavior, and a family history of suicidal behavior. 
Suicidal behavior is known to be transmitted within families, ", "title": "" }, { "docid": "a380ee9ea523d1a3a09afcf2fb01a70d", "text": "Back-translation has become a commonly employed heuristic for semi-supervised neural machine translation. The technique is both straightforward to apply and has led to stateof-the-art results. In this work, we offer a principled interpretation of back-translation as approximate inference in a generative model of bitext and show how the standard implementation of back-translation corresponds to a single iteration of the wake-sleep algorithm in our proposed model. Moreover, this interpretation suggests a natural iterative generalization, which we demonstrate leads to further improvement of up to 1.6 BLEU.", "title": "" }, { "docid": "c68dac8613bfd8984045c95a92211bc3", "text": "This paper analyses alternative techniques for deploying low-cost human resources for data acquisition for classifier induction in domains exhibiting extreme class imbalance - where traditional labeling strategies, such as active learning, can be ineffective. Consider the problem of building classifiers to help brands control the content adjacent to their on-line advertisements. Although frequent enough to worry advertisers, objectionable categories are rare in the distribution of impressions encountered by most on-line advertisers - so rare that traditional sampling techniques do not find enough positive examples to train effective models. An alternative way to deploy human resources for training-data acquisition is to have them \"guide\" the learning by searching explicitly for training examples of each class. We show that under extreme skew, even basic techniques for guided learning completely dominate smart (active) strategies for applying human resources to select cases for labeling. Therefore, it is critical to consider the relative cost of search versus labeling, and we demonstrate the tradeoffs for different relative costs. We show that in cost/skew settings where the choice between search and active labeling is equivocal, a hybrid strategy can combine the benefits.", "title": "" }, { "docid": "359d76f0b4f758c3a58e886e840c5361", "text": "Cover crops are important components of sustainable agricultural systems. They increase surface residue and aid in the reduction of soil erosion. They improve the structure and water-holding capacity of the soil and thus increase the effectiveness of applied N fertilizer. Legume cover crops such as hairy vetch and crimson clover fix nitrogen and contribute to the nitrogen requirements of subsequent crops. Cover crops can also suppress weeds, provide suitable habitat for beneficial predator insects, and act as non-host crops for nematodes and other pests in crop rotations. This paper reviews the agronomic and economic literature on using cover crops in sustainable food production and reports on past and present research on cover crops and sustainable agriculture at the Beltsville Agricultural Research Center, Maryland. Previous studies suggested that the profitability of cover crops is primarily the result of enhanced crop yields rather than reduced input costs. The experiments at the Beltsville Agricultural Research Center on fresh-market tomato production showed that tomatoes grown with hairy vetch mulch were higher yielding and more profitable than those grown with black polyethylene and no mulch system. 
Previous studies of cover crops in grain production indicated that legume cover crops such as hairy vetch and crimson clover are more profitable than grass cover crops such as rye or wheat because of the ability of legumes to contribute N to the following crop. A com-", "title": "" }, { "docid": "509f71d704e5e721642cc18eebd240c0", "text": "This paper presents an approach to the lane recognition using on-vehicle LIDAR. It detests the objects by 2D scanning and collects the range and reflectivity data in each scanning direction. We developed the lane recognition algorithm with these data, in which the lane curvature, yaw angle and offset are calculated by using the Hough transformation, and the lane width is calculated by statistical procedure. Next the lane marks are tracked by the extended Kalman filter. Then we test the performance of the lane recognition and the good results are achieved. Finally, we show the result of the road environment recognition applying the lane recognition by LIDAR", "title": "" }, { "docid": "3d6c42b7d5e7e440f2ca7fd4474c68df", "text": "This paper discusses the optimization of a stretchable electrical interconnection between integrated circuits in terms of stretchability and fatigue lifetime. The interconnection is based on Cu stripes embedded in a polyimide-enhanced (PI-enhanced) layer. Design-of-experiment (DOE) methods and finite-element modeling were used to obtain an optimal design and to define design guidelines, concerning both stripe and layer dimensions and material selection. Stretchable interconnects with a PI-enhanced layer were fabricated based on the optimized design parameters and tested. In situ experimental observations did validate the optimal design. Statistical analysis indicated that the PI width plays the most important role among the different design parameters. By increasing the PI width, the plastic strain in the Cu stripes is reduced, and thus, the stretchability and fatigue lifetime of the system is increased. The experimental results demonstrate that the PI-enhanced stretchable interconnect enables elongations up to 250% without Cu rupture. This maximum elongation is two times larger than the one in samples without PI enhancement . Moreover, the fatigue life at 30% elongation is 470 times higher.", "title": "" }, { "docid": "382ee4c7c870f9d05dee5546a664c553", "text": "Models based on the bivariate Poisson distribution are used for modelling sports data. Independent Poisson distributions are usually adopted to model the number of goals of two competing teams. We replace the independence assumption by considering a bivariate Poisson model and its extensions. The models proposed allow for correlation between the two scores, which is a plausible assumption in sports with two opposing teams competing against each other. The effect of introducing even slight correlation is discussed. Using just a bivariate Poisson distribution can improve model fit and prediction of the number of draws in football games.The model is extended by considering an inflation factor for diagonal terms in the bivariate joint distribution.This inflation improves in precision the estimation of draws and, at the same time, allows for overdispersed, relative to the simple Poisson distribution, marginal distributions. The properties of the models proposed as well as interpretation and estimation procedures are provided. 
An illustration of the models is presented by using data sets from football and water-polo.", "title": "" }, { "docid": "a427c3c0bcbfa10ce9ec1e7477697abe", "text": "We present a system for real-time general object recognition (gor) for indoor robot in complex scenes. A point cloud image containing the object to be recognized from a Kinect sensor, for general object at will, must be extracted a point cloud model of the object with the Cluster Extraction method, and then we can compute the global features of the object model, making up the model database after processing many frame images. Here the global feature we used is Clustered Viewpoint Feature Histogram (CVFH) feature from Point Cloud Library (PCL). For real-time gor we must preprocess all the point cloud images streamed from the Kinect into clusters based on a clustering threshold and the min-max cluster sizes related to the size of the model, for reducing the amount of the clusters and improving the processing speed, and also compute the CVFH features of the clusters. For every cluster of a frame image, we search the several nearer features from the model database with the KNN method in the feature space, and we just consider the nearest model. If the strings of the model name contain the strings of the object to be recognized, it can be considered that we have recognized the general object; otherwise, we compute another cluster again and perform the above steps. The experiments showed that we had achieved the real-time recognition, and ensured the speed and accuracy for the gor.", "title": "" }, { "docid": "7330b8af3f4b78c5965b2e847586d837", "text": "Bipolar disorder is characterized by recurrent manic and depressive episodes. Patients suffering from this disorder experience dramatic mood swings with a wide variety of typical behavioral facets, affecting overall activity, energy, sexual behavior, sense of self, self-esteem, circadian rhythm, cognition, and increased risk for suicide. Effective treatment options are limited and diagnosis can be complicated. To overcome these obstacles, a better understanding of the neurobiology underlying bipolar disorder is needed. Animal models can be useful tools in understanding brain mechanisms associated with certain behavior. The following review discusses several pathological aspects of humans suffering from bipolar disorder and compares these findings with insights obtained from several animal models mimicking diverse facets of its symptomatology. Various sections of the review concentrate on specific topics that are relevant in human patients, namely circadian rhythms, neurotransmitters, focusing on the dopaminergic system, stressful environment, and the immune system. We then explain how these areas have been manipulated to create animal models for the disorder. Even though several approaches have been conducted, there is still a lack of adequate animal models for bipolar disorder. Specifically, most animal models mimic only mania or depression and only a few include the cyclical nature of the human condition. Future studies could therefore focus on modeling both episodes in the same animal model to also have the possibility to investigate the switch from mania-like behavior to depressive-like behavior and vice versa. 
The use of viral tools and a focus on circadian rhythms and the immune system might make the creation of such animal models possible.", "title": "" }, { "docid": "a4a2f60248085008a91e8c5f5d99ef36", "text": "In process mining, precision measures are used to quantify how much a process model overapproximates the behavior seen in an event log. Although several measures have been proposed throughout the years, no research has been done to validate whether these measures achieve the intended aim of quantifying over-approximation in a consistent way for all models and logs. This paper fills this gap by postulating a number of axioms for quantifying precision consistently for any log and any model. Further, we show through counter-examples that none of the existing measures consistently quantifies precision.", "title": "" }, { "docid": "0297af005c837e410272ab3152942f90", "text": "Iris authentication is a popular method where persons are accurately authenticated. During authentication phase the features are extracted which are unique. Iris authentication uses IR images for authentication. This proposed work uses color iris images for authentication. Experiments are performed using ten different color models. This paper is focused on performance evaluation of color models used for color iris authentication. This proposed method is more reliable which cope up with different noises of color iris images. The experiments reveals the best selection of color model used for iris authentication. The proposed method is validated on UBIRIS noisy iris database. The results demonstrate that the accuracy is 92.1%, equal error rate of 0.072 and computational time is 0.039 seconds.", "title": "" }, { "docid": "73d58bbe0550fb58efc49ae5f84a1c7b", "text": "In this study, we will present the novel application of Type-2 (T2) fuzzy control into the popular video game called flappy bird. To the best of our knowledge, our work is the first deployment of the T2 fuzzy control into the computer games research area. We will propose a novel T2 fuzzified flappy bird control system that transforms the obstacle avoidance problem of the game logic into the reference tracking control problem. The presented T2 fuzzy control structure is composed of two important blocks which are the reference generator and Single Input Interval T2 Fuzzy Logic Controller (SIT2-FLC). The reference generator is the mechanism which uses the bird's position and the pipes' positions to generate an appropriate reference signal to be tracked. Thus, a conventional fuzzy feedback control system can be defined. The generated reference signal is tracked via the presented SIT2-FLC that can be easily tuned while also provides a certain degree of robustness to system. We will investigate the performance of the proposed T2 fuzzified flappy bird control system by providing comparative simulation results and also experimental results performed in the game environment. It will be shown that the proposed T2 fuzzified flappy bird control system results with a satisfactory performance both in the framework of fuzzy control and computer games. We believe that this first attempt of the employment of T2-FLCs in games will be an important step for a wider deployment of T2-FLCs in the research area of computer games.", "title": "" }, { "docid": "0320ebc09663ecd6bf5c39db472fcbde", "text": "The human visual system is capable of learning an unbounded number of facts from images including not only objects but also their attributes, actions and interactions. 
Such uniform understanding of visual facts has not received enough attention. Existing visual recognition systems are typically modeled differently for each fact type such as objects, actions, and interactions. We propose a setting where all these facts can be modeled simultaneously with a capacity to understand an unbounded number of facts in a structured way. The training data comes as structured facts in images, including (1) objects (e.g., <boy>), (2) attributes (e.g., <boy, tall>), (3) actions (e.g., <boy, playing>), and (4) interactions (e.g., <boy, riding, a horse >). Each fact has a language view (e.g., < boy, playing>) and a visual view (an image). We show that learning visual facts in a structured way enables not only a uniform but also generalizable visual understanding. We propose and investigate recent and strong approaches from the multiview learning literature and also introduce a structured embedding model. We applied the investigated methods on several datasets that we augmented with structured facts and a large scale dataset of > 202,000 facts and 814,000 images. Our results show the advantage of relating facts by the structure by the proposed model compared to the baselines.", "title": "" }, { "docid": "36e238fa3c85b41a062d08fd9844c9be", "text": "Building generalization is a difficult operation due to the complexity of the spatial distribution of buildings and for reasons of spatial recognition. In this study, building generalization is decomposed into two steps, i.e. building grouping and generalization execution. The neighbourhood model in urban morphology provides global constraints for guiding the global partitioning of building sets on the whole map by means of roads and rivers, by which enclaves, blocks, superblocks or neighbourhoods are formed; whereas the local constraints from Gestalt principles provide criteria for the further grouping of enclaves, blocks, superblocks and/or neighbourhoods. In the grouping process, graph theory, Delaunay triangulation and the Voronoi diagram are employed as supporting techniques. After grouping, some useful information, such as the sum of the building’s area, the mean separation and the standard deviation of the separation of buildings, is attached to each group. By means of the attached information, an appropriate operation is selected to generalize the corresponding groups. Indeed, the methodology described brings together a number of welldeveloped theories/techniques, including graph theory, Delaunay triangulation, the Voronoi diagram, urban morphology and Gestalt theory, in such a way that multiscale products can be derived.", "title": "" }, { "docid": "42d3f666325c3c9e2d61fcbad3c6659a", "text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. 
The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.", "title": "" }, { "docid": "615a24719fe4300ea8971e86014ed8fe", "text": "This paper presents a new code for the analysis of gamma spectra generated by equipment for the continuous measurement of gamma radioactivity in aerosols with a paper filter. It is called pGamma and has been developed by the Nuclear Engineering Research Group at the Technical University of Catalonia - Barcelona Tech and by Raditel Serveis i Subministraments Tecnològics, Ltd. The code has been developed to identify the gamma emitters and to determine their activity concentration. It generates alarms depending on the activity of the emitters and produces reports. Therefore, it includes a library with NORM and artificial emitters of interest. The code is being adapted to the monitors of the Environmental Radiological Surveillance Network of the local Catalan Government in Spain (Generalitat de Catalunya) and is used at three stations of the Network.", "title": "" } ]
scidocsrr
5e9bb96b6522e7a9884b54037171dc6e
Digital Maturity in Traditional industries - an Exploratory Analysis
[ { "docid": "ddffafc22209fc71c6c572dea0ddfca4", "text": "In the context of an ongoing digital transformation, companies across all industries are confronted with the challenge to exploit IT-induced business opportunities and to simultaneously avert IT-induced business risks. Due to this development, questions about a company’s overall status with regard to its digital transformation become more and more relevant. In recent years, an unclear number of maturity models was established in order to address these kind of questions by assessing a company’s digital maturity. Purpose of this Report is to show the large range of digital maturity models and to evaluate overall potential for approximating a company’s digital transformation status.", "title": "" } ]
[ { "docid": "06f94060645ff4a251ebdf9ac2687bca", "text": "This paper proposes a terminal sliding-mode (TSM) observer for estimating the immeasurable mechanical parameters of permanent-magnet synchronous motors (PMSMs) used for complex mechanical systems. The observer can track the system states in finite time with high steady-state precision. A TSM control strategy is designed to guarantee the global finite-time stability of the observer and, meanwhile, to estimate the mechanical parameters of the PMSM. A novel second-order sliding-mode algorithm is designed to soften the switching control signal of the observer. The effect of the equivalent low-pass filter can be properly controlled in the algorithm based on requirements. The smooth signal of the TSM observer is directly used for the parameter estimation. The experimental results in a practical CNC machine tool are provided to demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "bee18c0e11ec5db199861ef74b06bfe1", "text": "Financial time series are complex, non-stationary and deterministically chaotic. Technical indicators are used with principal component analysis (PCA) in order to identify the most influential inputs in the context of the forecasting model. Neural networks (NN) and support vector regression (SVR) are used with different inputs. Our assumption is that the future value of a stock price depends on the financial indicators although there is no parametric model to explain this relationship. This relationship comes from technical analysis. Comparison shows that SVR and MLP networks require different inputs. The MLP networks outperform the SVR technique.", "title": "" }, { "docid": "bd1fdbfcc0116dcdc5114065f32a883e", "text": "Thousands of operations are annually guided with computer assisted surgery (CAS) technologies. As the use of these devices is rapidly increasing, the reliability of the devices becomes ever more critical. The problem of accuracy assessment of the devices has thus become relevant. During the past five years, over 200 hazardous situations have been documented in the MAUDE database during operations using these devices in the field of neurosurgery alone. Had the accuracy of these devices been periodically assessed pre-operatively, many of them might have been prevented. The technical accuracy of a commercial navigator enabling the use of both optical (OTS) and electromagnetic (EMTS) tracking systems was assessed in the hospital setting using accuracy assessment tools and methods developed by the authors of this paper. The technical accuracy was obtained by comparing the positions of the navigated tool tip with the phantom accuracy assessment points. Each assessment contained a total of 51 points and a region of surgical interest (ROSI) volume of 120x120x100 mm roughly mimicking the size of the human head. The error analysis provided a comprehensive understanding of the trend of accuracy of the surgical navigator modalities. This study showed that the technical accuracies of OTS and EMTS over the pre-determined ROSI were nearly equal. However, the placement of the particular modality hardware needs to be optimized for the surgical procedure. 
New applications of EMTS, which does not require rigid immobilization of the surgical area, are suggested.", "title": "" }, { "docid": "4b0df92d1e18a47a9636a394a369a657", "text": "OBJECTIVE\nTo compare growth rates of ovarian follicles during natural menstrual cycles, oral contraception (OC) cycles, and ovarian stimulation cycles using standardized techniques.\n\n\nDESIGN\nProspective, comparative, observational, longitudinal study.\n\n\nSETTING\nHealthy volunteers in research trials and infertility patients undergoing treatment at an academic institution.\n\n\nPATIENT(S)\nWomen were evaluated during natural cycles (n = 50), OC cycles (n = 71), and ovarian stimulation cycles (n = 131).\n\n\nINTERVENTION(S)\nSerial transvaginal ultrasonography was performed to measure follicle diameter. Day-to-day growth and regression profiles of individual follicles were determined. Mean growth rates were calculated for ovulatory follicles. Mean growth and regression rates were calculated for anovulatory follicles.\n\n\nMAIN OUTCOME MEASURE(S)\nFollicle growth rate (in millimeters per day).\n\n\nRESULT(S)\nMean follicular growth rate was greater during ovarian stimulation cycles (1.69 +/- 0.03 mm/day) compared to natural (1.42 +/- 0.05 mm/day) and OC cycles (1.36 +/- 0.08 mm/day). The interval from dominant follicle selection to ovulation was shorter during stimulation cycles (5.08 +/- 0.07 days) compared to natural cycles (7.16 +/- 0.23 days).\n\n\nCONCLUSION(S)\nFollicles grew faster during ovarian stimulation therapy compared to natural cycles or OC cycles. Greater follicular growth rates in stimulation cycles were associated with shorter intervals from selection to ovulation. The biologic effects of increased follicular growth rates and shorter intervals to ovulation on oocyte competence in women undergoing assisted reproduction remain to be determined.", "title": "" }, { "docid": "0904545d069ac10ff9783cd9647d4066", "text": "Technological advances are taking a major role in every field of our life. Today, younger generation is more attached to technology, immerging it mostly for social purposes. Therefore, the importance of its existence cannot be ignored. For that, it is the time for every mentor to apply technology to education. Instructors from different majors need to realize that integrating technology into education is a powerful tool that helps them moderate their course, but never a replacement to their existence. This paper’s interest is to deliver a personal experience to other instructors on how to correctly use technology for educational purposes. One way of clarifying this point is to shed light on a very common social application that is WhatsApp. It is a social application available on every smartphone that is usually used as a social medium among users from different generation. This paper used WhatsApp as an application that can associate technology with learning and teachers’ moderation and collaboration under one roof, and that is by applying Mobile learning. One main question to rise at this point is whether students are going to be collaborative or not with their teacher in applying technology into education. There will be an anticipated approach from this paper on both Mobile learning and WhatsApp, that is to reach an agreement that Mobile learning is essential and adds value to the educational material we have in hand. Great examples from my own data are going to be presented to encourage others to predict new ways that can be added to my effort and others as well. 
The result hoped for after this paper is to be able to answer any digital immigrants’ questions and help them to be more confident with technology.", "title": "" }, { "docid": "7a4bb28ae7c175a018b278653e32c3a1", "text": "Additive manufacturing (AM) alias 3D printing translates computer-aided design (CAD) virtual 3D models into physical objects. By digital slicing of CAD, 3D scan, or tomography data, AM builds objects layer by layer without the need for molds or machining. AM enables decentralized fabrication of customized objects on demand by exploiting digital information storage and retrieval via the Internet. The ongoing transition from rapid prototyping to rapid manufacturing prompts new challenges for mechanical engineers and materials scientists alike. Because polymers are by far the most utilized class of materials for AM, this Review focuses on polymer processing and the development of polymers and advanced polymer systems specifically for AM. AM techniques covered include vat photopolymerization (stereolithography), powder bed fusion (SLS), material and binder jetting (inkjet and aerosol 3D printing), sheet lamination (LOM), extrusion (FDM, 3D dispensing, 3D fiber deposition, and 3D plotting), and 3D bioprinting. The range of polymers used in AM encompasses thermoplastics, thermosets, elastomers, hydrogels, functional polymers, polymer blends, composites, and biological systems. Aspects of polymer design, additives, and processing parameters as they relate to enhancing build speed and improving accuracy, functionality, surface finish, stability, mechanical properties, and porosity are addressed. Selected applications demonstrate how polymer-based AM is being exploited in lightweight engineering, architecture, food processing, optics, energy technology, dentistry, drug delivery, and personalized medicine. Unparalleled by metals and ceramics, polymer-based AM plays a key role in the emerging AM of advanced multifunctional and multimaterial systems including living biological systems as well as life-like synthetic systems.", "title": "" }, { "docid": "426c61637ea724f81b2f1f1b63094095", "text": "Cancer is the general name for a group of more than 100 diseases. Although cancer includes different types of diseases, they all start because abnormal cells grow out of control. Without treatment, cancer can cause serious health problems and even loss of life. Early detection of cancer may reduce mortality and morbidity. This paper presents a review of the detection methods for lung, breast, and brain cancers. These methods used for diagnosis include artificial intelligence techniques, such as support vector machine neural network, artificial neural network, fuzzy logic, and adaptive neuro-fuzzy inference system, with medical imaging like X-ray, ultrasound, magnetic resonance imaging, and computed tomography scan images. Imaging techniques are the most important approach for precise diagnosis of human cancer. We investigated all these techniques to identify a method that can provide superior accuracy and determine the best medical images for use in each type of cancer.", "title": "" }, { "docid": "a75c0cb773d4123d6f6dbf610fff24cf", "text": "Optimization of parameterized policies for reinforcement learning (RL) is an important and challenging problem in artificial intelligence. Among the most common approaches are algorithms based on gradient ascent of a score function representing discounted return. 
In this paper, we examine the role of these policy gradient and actor-critic algorithms in partially-observable multiagent environments. We show several candidate policy update rules and relate them to a foundation of regret minimization and multiagent learning techniques for the one-shot and tabular cases, leading to previously unknown convergence guarantees. We apply our method to model-free multiagent reinforcement learning in adversarial sequential decision problems (zero-sum imperfect information games), using RL-style function approximation. We evaluate on commonly used benchmark Poker domains, showing performance against fixed policies and empirical convergence to approximate Nash equilibria in self-play with rates similar to or better than a baseline model-free algorithm for zero-sum games, without any domain-specific state space reductions.", "title": "" }, { "docid": "68ecec113e3a5376abcac69bb853e38e", "text": "The term “outlier” can generally be defined as an observation that is significantly different from the other values in a data set. The outliers may be instances of error or indicate events. The task of outlier detection aims at identifying such outliers in order to improve the analysis of data and further discover interesting and useful knowledge about unusual events within numerous applications domains. In this paper, we report on contemporary unsupervised outlier detection techniques for multiple types of data sets and provide a comprehensive taxonomy framework and two decision trees to select the most suitable technique based on data set. Furthermore, we highlight the advantages, disadvantages and performance issues of each class of outlier detection techniques under this taxonomy framework.", "title": "" }, { "docid": "098811fda39d0ee370f69045f355cebe", "text": "Oil theft always results in huge economic loss, human casualties, and extremely environmental pollution especially when the leaks from crude oil pipeline are not detected and repaired timely. In this paper, we focus on how to detect and monitor abnormal noise and vibration beforehand or in real time by the Internet of Things (IoT). Firstly, the diversities of crude oil theft and the difficulties of oil anti-theft are analyzed in China, and the requirement analysis of the IoT application is stated. Secondly, the intelligent antitheft system based on the IoT is planned and designed for crude oil transportation by tank trucks and by oil pipelines according to the current situation in China. Thirdly, the problems of anti-theft system implementation are discussed, and the suggestions and advice are put forward to ensure that the system can be implemented successfully. The intelligent anti-theft system application can not only stop oil theft timely, but also prevent oil mice from stealing crude oil beforehand. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of KES International.", "title": "" }, { "docid": "50990a1cfff001036cf58046c2923183", "text": "Omni-directional video (ODV) is a novel medium that offers viewers a 360º panoramic recording. This type of content will become more common within our living rooms in the near future, seeing that immersive displaying technologies such as 3D television are on the rise. However, little attention has been given to how to interact with ODV content. 
We present a gesture elicitation study in which we asked users to perform mid-air gestures that they consider to be appropriate for ODV interaction, both for individual as well as collocated settings. We are interested in the gesture variations and adaptations that come forth from individual and collocated usage. To this end, we gathered quantitative and qualitative data by means of observations, motion capture, questionnaires and interviews. This data resulted in a user-defined gesture set for ODV, alongside an in-depth analysis of the variation in gestures we observed during the study.", "title": "" }, { "docid": "a383d9b392a58f6ba8a7192104e99600", "text": "In this correspondence, we present a new universal entropy estimator for stationary ergodic sources, prove almost sure convergence, and establish an upper bound on the convergence rate for finite-alphabet finite memory sources. The algorithm is motivated by data compression using the Burrows-Wheeler block sorting transform (BWT). By exploiting the property that the BWT output sequence is close to a piecewise stationary memoryless source, we can segment the output sequence and estimate probabilities in each segment. Experimental results show that our algorithm outperforms Lempel-Ziv (LZ) string-matching-based algorithms.", "title": "" }, { "docid": "bd3620816c83fae9b4a5c871927f2b73", "text": "Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy. Using a deep learning approach to track user-defined body parts during various behaviors across multiple species, the authors show that their toolbox, called DeepLabCut, can achieve human accuracy with only a few hundred frames of training data.", "title": "" }, { "docid": "741619d65757e07394a161f4b96ec408", "text": "Self-disclosure plays a central role in the development and maintenance of relationships. One way that researchers have explored these processes is by studying the links between self-disclosure and liking. Using meta-analytic procedures, the present work sought to clarify and review this literature by evaluating the evidence for 3 distinct disclosure-liking effects. Significant disclosure-liking relations were found for each effect: (a) People who engage in intimate disclosures tend to be liked more than people who disclose at lower levels, (b) people disclose more to those whom they initially like, and (c) people like others as a result of having disclosed to them. In addition, the relation between disclosure and liking was moderated by a number of variables, including study paradigm, type of disclosure, and gender of the discloser. 
Taken together, these results suggest that various disclosure-liking effects can be integrated and viewed as operating together within a dynamic interpersonal system. Implications for theory development are discussed, and avenues for future research are suggested.", "title": "" }, { "docid": "c6005a99e6a60a4ee5f958521dcad4d3", "text": "We document initial experiments with Canid, a freestanding, power-autonomous quadrupedal robot equipped with a parallel actuated elastic spine. Research into robotic bounding and galloping platforms holds scientific and engineering interest because it can both probe biological hypotheses regarding bounding and galloping mammals and also provide the engineering community with a new class of agile, efficient and rapidly-locomoting legged robots. We detail the design features of Canid that promote our goals of agile operation in a relatively cheap, conventionally prototyped, commercial off-the-shelf actuated platform. We introduce new measurement methodology aimed at capturing our robot’s “body energy” during real time operation as a means of quantifying its potential for agile behavior. Finally, we present joint motor, inertial and motion capture data taken from Canid’s initial leaps into highly energetic regimes exhibiting large accelerations that illustrate the use of this measure and suggest its future potential as a platform for developing efficient, stable, hence useful bounding gaits.", "title": "" }, { "docid": "2b677a052846d4f52f7b6a1eac94114d", "text": "This paper presents a unifying view of messagepassing algorithms, as methods to approximate a complex Bayesian network by a simpler network with minimum information divergence. 
In this view, the difference between mean-field methods and belief propagation is not the amount of structure they model, but only the measure of loss they minimize (‘exclusive’ versus ‘inclusive’ Kullback-Leibler divergence). In each case, message-passing arises by minimizing a localized version of the divergence, local to each factor. By examining these divergence measures, we can intuit the types of solution they prefer (symmetry-breaking, for example) and their suitability for different tasks. Furthermore, by considering a wider variety of divergence measures (such as alpha-divergences), we can achieve different complexity and performance goals.", "title": "" }, { "docid": "4d7b4fe86b906baae887c80e872d71a4", "text": "The use of serologic testing and its value in the diagnosis of Lyme disease remain confusing and controversial for physicians, especially concerning persons who are at low risk for the disease. The approach to diagnosing Lyme disease varies depending on the probability of disease (based on endemicity and clinical findings) and the stage at which the disease may be. In patients from endemic areas, Lyme disease may be diagnosed on clinical grounds alone in the presence of erythema migrans. These patients do not require serologic testing, although it may be considered according to patient preference. When the pretest probability is moderate (e.g., in a patient from a highly or moderately endemic area who has advanced manifestations of Lyme disease), serologic testing should be performed with the complete two-step approach in which a positive or equivocal serology is followed by a more specific Western blot test. Samples drawn from patients within four weeks of disease onset are tested by Western blot technique for both immunoglobulin M and immunoglobulin G antibodies; samples drawn more than four weeks after disease onset are tested for immunoglobulin G only. Patients who show no objective signs of Lyme disease have a low probability of the disease, and serologic testing in this group should be kept to a minimum because of the high risk of false-positive results. When unexplained nonspecific systemic symptoms such as myalgia, fatigue, and paresthesias have persisted for a long time in a person from an endemic area, serologic testing should be performed with the complete two-step approach described above.", "title": "" }, { "docid": "288362498806eec599ff92cc62556d8d", "text": "Recently, algorithms for object recognition and related tasks have become sufficiently proficient that new vision tasks can now be pursued. In this paper, we build a system capable of answering open-ended text-based questions about images, which is known as Visual Question Answering (VQA). Our approach's key insight is that we can predict the form of the answer from the question. We formulate our solution in a Bayesian framework. When our approach is combined with a discriminative model, the combined model achieves state-of-the-art results on four benchmark datasets for open-ended VQA: DAQUAR, COCO-QA, The VQA Dataset, and Visual7W.", "title": "" }, { "docid": "c61c111c5b5d1c4663905371b638e703", "text": "Many standard computer vision datasets exhibit biases due to a variety of sources including illumination condition, imaging system, and preference of dataset collectors. 
Biases like these can have downstream effects in the use of vision datasets in the construction of generalizable techniques, especially for the goal of the creation of a classification system capable of generalizing to unseen and novel datasets. In this work we propose Unbiased Metric Learning (UML), a metric learning approach, to achieve this goal. UML operates in the following two steps: (1) By varying hyper parameters, it learns a set of less biased candidate distance metrics on training examples from multiple biased datasets. The key idea is to learn a neighborhood for each example, which consists of not only examples of the same category from the same dataset, but those from other datasets. The learning framework is based on structural SVM. (2) We do model validation on a set of weakly-labeled web images retrieved by issuing class labels as keywords to search engine. The metric with best validation performance is selected. Although the web images sometimes have noisy labels, they often tend to be less biased, which makes them suitable for the validation set in our task. Cross-dataset image classification experiments are carried out. Results show significant performance improvement on four well-known computer vision datasets.", "title": "" }, { "docid": "873be467576bff16904d7abc6c961394", "text": "A bunny ear shaped combline element for dual-polarized compact aperture arrays is presented which provides relatively low noise temperature and low level cross polarization over a wide bandwidth and wide scanning angles. The element is corrugated along the outer edges between the elements to control the complex mutual coupling at high scan angles. This produces nearly linear polarized waves in the principle planes and lower than -10 dB cross polarization in the intercardinal plane. To achieve a low noise temperature, only metal conductors are used, which also results in a low cost of manufacture. Dual linear polarization or circular polarization can be realized by adopting two different arrangements of the orthogonal elements. The performances for both mechanical arrangements are investigated. The robustness of the new design over the conventional Vivaldi-type antennas is highlighted.", "title": "" } ]
scidocsrr
15efbf0f333f9a532cb1deda4dfaa8bd
Hybreed: A software framework for developing context-aware hybrid recommender systems
[ { "docid": "b7bb7e480400d6a58d5d5f1795219234", "text": "This paper introduces a method for giving recommendations of tourist activities to a group of users. This method makes recommendations based on the group tastes, their demographic classification and the places visited by the users in former trips. The group recommendation is computed from individual personal recommendations through the use of techniques such as aggregation, intersection or incremental intersection. This method is implemented as an extension of the e-Tourism tool, which is a user-adapted tourism and leisure application, whose main component is the Generalist Recommender System Kernel (GRSK), a domain-independent taxonomy-driven search engine that manages the group recommendation.", "title": "" }, { "docid": "cc2a7d6ac63f12b29a6d30f20b5547be", "text": "The CyberDesk project is aimed at providing a software architecture that dynamically integrates software modules. This integration is driven by a user’s context, where context includes the user’s physical, social, emotional, and mental (focus-of-attention) environments. While a user’s context changes in all settings, it tends to change most frequently in a mobile setting. We have used the CyberDesk ystem in a desktop setting and are currently using it to build an intelligent home nvironment.", "title": "" } ]
[ { "docid": "e51f7fde238b0896df22d196b8c59c1a", "text": "The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most of the existing color constancy algorithms are based on specific imaging assumptions such as the grey-world and white patch assumptions. In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions found in images. To this end, images are first classified into stages (rough 3D geometry models). According to the stage models, images are divided into different regions using hard and soft segmentation. After that, the best color constancy algorithm is selected for each geometry segment. As a result, light source estimation is tuned to the global scene geometry. Our algorithm opens the possibility to estimate the remote scene illumination color, by distinguishing nearby light source from distant illuminants. Experiments on large scale image datasets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms with an improvement of almost 14% of median angular error. When using an ideal classifier (i.e, all of the test images are correctly classified into stages), the performance of the proposed method achieves an improvement of 31% of median angular error compared to the best-performing single color constancy algorithm.", "title": "" }, { "docid": "afe24ba1c3f3423719a98e1a69a3dc70", "text": "This brief presents a nonisolated multilevel linear amplifier with nonlinear component (LINC) power amplifier (PA) implemented in a standard 0.18-μm complementary metal-oxide- semiconductor process. Using a nonisolated power combiner, the overall power efficiency is increased by reducing the wasted power at the combined out-phased signal; however, the efficiency at low power still needs to be improved. To further improve the efficiency of the low-power (LP) mode, we propose a multiple-output power-level LINC PA, with load modulation implemented by switches. In addition, analysis of the proposed design on the system level as well as the circuit level was performed to optimize its performance. The measurement results demonstrate that the proposed technique maintains more than 45% power-added efficiency (PAE) for peak power at 21 dB for the high-power mode and 17 dBm for the LP mode at 600 MHz. The PAE for a 6-dB peak-to-average ratio orthogonal frequency-division multiplexing modulated signal is higher than 24% PAE in both power modes. To the authors' knowledge, the proposed output-phasing PA is the first implemented multilevel LINC PA that uses quarter-wave lines without multiple power supply sources.", "title": "" }, { "docid": "d183be50b6cb55cbf42bc273b7e2e957", "text": "THE FUNCTIONAL MOVEMENT SCREEN (FMS) IS A PREPARTICIPATION SCREENING TOOL COMPRISING 7 INDIVIDUAL TESTS FOR WHICH BOTH INDIVIDUAL SCORES AND AN OVERALL SCORE ARE GIVEN. THE FMS DISPLAYS BOTH INTERRATER AND INTRARATER RELIABILITY BUT HAS BEEN CHALLENGED ON THE BASIS OF A LACK OF VALIDITY IN SEVERAL RESPECTS. THE FMS SEEMS TO HAVE SOME DEGREE OF PREDICTIVE ABILITY FOR IDENTIFYING ATHLETES WHO ARE AT AN INCREASED RISK OF INJURY. HOWEVER, A POOR SCORE ON THE FMS DOES NOT PRECLUDE ATHLETES FROM COMPETING AT THE HIGHEST LEVEL NOR DOES IT DIFFERENTIATE BETWEEN ATHLETES OF DIFFERING ABILITIES. 
The functional movement screen (FMS) is a pre-participation screening tool comprising 7 individual tests for which both individual scores and an overall score are given (11). The 7 tests are rated from 0 to 3 by an examiner and include the deep squat, hurdle step, in-line lunge, shoulder mobility, active straight leg raise, trunk stability push-up, and rotary stability (11,12). The score of 0 is given if pain occurs during a test, the score of 1 is given if the subject is not able to perform the movement, the score of 2 is given if the subject is able to complete the movement but compensates in some way, and the score of 3 is given if the subject performs the movement correctly (11). It has been suggested that a less-than-perfect score on a single individual test of the FMS reveals a “compensatory movement pattern.” Such compensatory movement patterns have been proposed to lead to athletes “sacrificing efficient movements for inefficient ones” (11), which implies the replacement of either a more economical or more effective pattern with a less economical or less effective one. It has also been proposed that such compensatory movement patterns predispose an athlete to injury and reduced performance and may be corrected by performing specific exercises. As a designer of the FMS states: “an athlete who is unable to perform a movement correctly . . . has uncovered a significant piece of information that may be the key to reducing the risk of chronic injuries, improving overall sport performance, and developing a training or rehabilitation program . . .” (9). This seems to imply that the FMS is put forward as a valid test for identifying certain movement patterns that lead to greater injury risk and reduced athletic performance. In the course of our review, we did not identify a formal definition of the concept “compensatory movement pattern.” We suggest that it can be defined as a kinematic feature or sequence of features observed during the performance of a movement that deviate from a template that is thought to represent the least injurious way of performing the movement. In the FMS, the individual scores for each movement are combined into a final score out of 21 total possible points. It has been suggested that lower overall scores predict individuals who are at a greater risk of injury than those with higher scores (11). In practice, researchers have generally identified 14 points as the ideal cut-off point for those at greater or less risk of injury (5,8,36,38,41,43). The cut-off value of 14 points was in certain studies identified by means of a statistical method known as a receiver-operator characteristic (ROC) curve (5,38,43). This technique allows researchers to identify the numerical score that maximizes the correct prediction of injury classification (66). However, in other cases (8,36), the researchers simply adopted the cut-off value of 14 points based on the findings of previous studies. Although this may not maximize the predictability of the cut-off point in those individual studies that elected not to use a ROC curve, it does have the advantage of enhancing comparability between trials. Studies investigating the norms for FMS overall scores have identified that the normal FMS score in healthy but untrained populations ranges from 14.14 ± 2.85 points (51) to 15.7 ± 1.9 points (53). 
This suggests that most untrained people are slightly above the cut-off score of ≤14 points, which is thought to be indicative of prevalent compensation patterns and which is also believed to be predictive of increased risk of injury and reduced performance.", "title": "" }, { "docid": "defbecacc15af7684a6f9722349f42e3", "text": "We present a novel, unsupervised, and distance measure agnostic method for search space reduction in spell correction using neural character embeddings. The embeddings are learned by skip-gram word2vec training on sequences generated from dictionary words in a phonetic information-retentive manner. We report a very high performance in terms of both success rates and reduction of search space on the Birkbeck spelling error corpus. To the best of our knowledge, this is the first application of word2vec to spell correction.", "title": "" }, { "docid": "cb9d7c9cdd4bc90d08f2d22ad6931e66", "text": "We present a topologically robust algorithm for Boolean operations on polyhedral boundary models. The algorithm can be proved always to generate a result with valid connectivity if the input shape representations have valid connectivity, irrespective of the type of arithmetic used or the extent of numerical errors in the computations or input data. The main part of the algorithm is based on a series of interdependent operations. The relationship between these operations ensures a consistency in the intermediate results that guarantees correct connectivity in the final result. Either a triangle mesh or polygon mesh can be used. Although the basic algorithm may generate geometric artifacts, principally gaps and slivers, a data smoothing post-process can be applied to the result to remove such artifacts, thereby making the combined process a practical and reliable way of performing Boolean operations. © 2006 Published by Elsevier Ltd", "title": "" }, { "docid": "aa9bfea9c679cfef5c3ad6d810873578", "text": "The paper deals with moment invariants, which are invariant under general affine transformation and may be used for recognition of affine-deformed objects. Our approach is based on the theory of algebraic invariants. The invariants from second- and third-order moments are derived and shown to be complete. The paper is a significant extension and generalization of recent works. Several numerical experiments dealing with pattern recognition by means of the affine moment invariants as the features are described. Keywords: Feature extraction, Affine transform, Algebraic invariants, Moment invariants, Pattern recognition, Image matching. 1. INTRODUCTION A feature-based recognition of objects or patterns independent of their position, size, orientation and other variations has been the goal of much recent research. Finding efficient invariant features is the key to solving this problem. There have been several kinds of features used for recognition. These may be divided into four groups as follows: (1) visual features (edges, textures and contours); (2) transform coefficient features (Fourier descriptors,(1,2) Hadamard coefficients(3)); (3) algebraic features (based on matrix decomposition of image, see reference (4) for details); and (4) statistical features (moment invariants). In this paper, attention is paid to statistical features. Moment invariants are very useful tools for pattern recognition. They were derived by Hu(5) and they were successfully used in aircraft identification,(6) remotely sensed data matching(7) and character recognition. 
Further studies were made by Maitra(9) and Hsia(10) in order to reach higher reliability. Several effective algorithms for fast computation of moment invariants were recently described in references (11-13). All the above-mentioned features are invariant only under translation, rotation and scaling of the object. In this paper, our aim is to find features which are invariant under general affine transformations and which may be used for recognition of affine-deformed objects. Our approach is based on the theory of algebraic invariants.(14) The first attempt to find affine invariants in this way was made by Hu,(5) but his affine moment invariants were derived incorrectly. Several correct affine moment invariants are derived in Section 2, and their use for object recognition and scene matching is experimentally proved in Section 3. 2. AFFINE MOMENT INVARIANTS The affine moment invariants are derived by means of the theory of algebraic invariants. They are invariant under the general affine transformation u = a_0 + a_1·x + a_2·y, v = b_0 + b_1·x + b_2·y. (1) The general two-dimensional (p + q)th order moments of a density distribution function ρ(x, y) are defined as: m_pq = ∫∫ x^p y^q ρ(x, y) dx dy, p, q = 0, 1, 2, ... (2) For simplicity we deal only with binary objects in this paper; then ρ is a characteristic function of object G, and m_pq = ∫∫_G x^p y^q dx dy, p, q = 0, 1, 2, ... (3) It is possible to generalize all the following relations and results for grey-level objects. The affine transformation (1) can be decomposed into six one-parameter transformations: 1. u = x + α, v = y; 2. u = x, v = y + β; 3. u = ω·x, v = ω·y; 4. u = δ·x, v = y; 5. u = x + t′·y, v = y; 6. u = x, v = t″·x + y. Any function F of moments which is invariant under these six transformations will be invariant under the general affine transformation (1). From the requirement of invariantness under these transformations we can derive the type and parameters of the function F. If we use central moments instead of general moments (2) or (3), any function of them will be invariant", "title": "" }, { "docid": "8674128201d80772040446f1ab6a7cd1", "text": "In this paper, we present an attribute graph grammar for image parsing on scenes with man-made objects, such as buildings, hallways, kitchens, and living rooms. We choose one class of primitives - 3D planar rectangles projected on images - and six graph grammar production rules. Each production rule not only expands a node into its components, but also includes a number of equations that constrain the attributes of a parent node and those of its children. Thus our graph grammar is context sensitive. The grammar rules are used recursively to produce a large number of objects and patterns in images and thus the whole graph grammar is a type of generative model. The inference algorithm integrates bottom-up rectangle detection which activates top-down prediction using the grammar rules. The final results are validated in a Bayesian framework. The output of the inference is a hierarchical parsing graph with objects, surfaces, rectangles, and their spatial relations. In the inference, the acceptance of a grammar rule means recognition of an object, and actions are taken to pass the attributes between a node and its parent through the constraint equations associated with this production rule. 
When an attribute is passed from a child node to a parent node, it is called bottom-up, and the opposite is called top-down", "title": "" }, { "docid": "6dbabfe7370b19c55a52671c82c3e3c8", "text": "The development of a compact circular polarization Orthomode Trasducer (OMT) working in two frequency bands with dual circular polarization (RHCP & LHCP) is presented. The device covers the complete communication spectrum allocated at C-band. At the same time, the device presents high power handling capability and very low mass and envelope size. The OMT plus a feed horn are used to illuminate a Reflector antenna, the surface of which is shaped to provide domestic or regional coverage from geostationary orbit. The full band operation increases the earth-satellite communication capability. The paper will show the OMT selected architecture, the RF performances at unit level and at component level. RF power aspects like multipaction and PIM are addressed. This development was performed under European Space Agency ESA ARTES-4 program.", "title": "" }, { "docid": "1ee33deb30b4ffae5ea16dc4ad2f93ff", "text": "Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that can allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent. We experimentally validate the performance of our method on MNIST, CIFAR 10 and Imagenet classification.", "title": "" }, { "docid": "93076fee7472e1a89b2b3eb93cff4737", "text": "This paper presents a fast and robust level set method for image segmentation. To enhance the robustness against noise, we embed a Markov random field (MRF) energy function to the conventional level set energy function. This MRF energy function builds the correlation of a pixel with its neighbors and encourages them to fall into the same region. To obtain a fast implementation of the MRF embedded level set model, we explore algebraic multigrid (AMG) and sparse field method (SFM) to increase the time step and decrease the computation domain, respectively. Both AMG and SFM can be conducted in a parallel fashion, which facilitates the processing of our method for big image databases. By comparing the proposed fast and robust level set method with the standard level set method and its popular variants on noisy synthetic images, synthetic aperture radar (SAR) images, medical images, and natural images, we comprehensively demonstrate the new method is robust against various kinds of noises. In particular, the new level set method can segment an image of size 500 × 500 within 3 s on MATLAB R2010b installed in a computer with 3.30-GHz CPU and 4-GB memory.", "title": "" }, { "docid": "a9f6c0dfd884fb22e039b37e98f22fe0", "text": "Image semantic segmentation is a fundamental problem and plays an important role in computer vision and artificial intelligence. Recent deep neural networks have improved the accuracy of semantic segmentation significantly. 
Meanwhile, the number of network parameters and floating point operations have also increased notably. The realworld applications not only have high requirements on the segmentation accuracy, but also demand real-time processing. In this paper, we propose a pyramid pooling encoder-decoder network named PPEDNet for both better accuracy and faster processing speed. Our encoder network is based on VGG16 and discards the fully connected layers due to their huge amounts of parameters. To extract context feature efficiently, we design a pyramid pooling architecture. The decoder is a trainable convolutional network for upsampling the output of the encoder, and finetuning the segmentation details. Our method is evaluated on CamVid dataset, achieving 7.214% mIOU accuracy improvement while reducing 17.9% of the parameters compared with the state-of-the-art algorithm.", "title": "" }, { "docid": "ec8f8f8611a4db6d70ba7913c3b80687", "text": "Identifying building footprints is a critical and challenging problem in many remote sensing applications. Solutions to this problem have been investigated using a variety of sensing modalities as input. In this work, we consider the detection of building footprints from 3D Digital Surface Models (DSMs) created from commercial satellite imagery along with RGB orthorectified imagery. Recent public challenges (SpaceNet 1 and 2, DSTL Satellite Imagery Feature Detection Challenge, and the ISPRS Test Project on Urban Classification) approach this problem using other sensing modalities or higher resolution data. As a result of these challenges and other work, most publically available automated methods for building footprint detection using 2D and 3D data sources as input are meant for high-resolution 3D lidar and 2D airborne imagery, or make use of multispectral imagery as well to aid detection. Performance is typically degraded as the fidelity and post spacing of the 3D lidar data or the 2D imagery is reduced. Furthermore, most software packages do not work well enough with this type of data to enable a fully automated solution. We describe a public benchmark dataset consisting of 50 cm DSMs created from commercial satellite imagery, as well as coincident 50 cm RGB orthorectified imagery products. The dataset includes ground truth building outlines and we propose representative quantitative metrics for evaluating performance. In addition, we provide lessons learned and hope to promote additional research in this field by releasing this public benchmark dataset to the community.", "title": "" }, { "docid": "ade88f8a9aa8a47dd2dc5153b3584695", "text": "A software environment is described which provides facilities at a variety of levels for “animating” algorithms: exposing properties of programs by displaying multiple dynamic views of the program and associated data structures. The system is operational on a network of graphics-based, personal workstations and has been used successfully in several applications for teaching and research in computer science and mathematics. In this paper, we outline the conceptual framework that we have developed for animating algorithms, describe the system that we have implemented, and give several examples drawn from the host of algorithms that we have animated.", "title": "" }, { "docid": "619165e7f74baf2a09271da789e724df", "text": "MOST verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. 
The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.", "title": "" }, { "docid": "616749e7918accb48e46a13d6d1a36c2", "text": "Achieving long battery lives or even self sustainability has been a long standing challenge for designing mobile devices. This paper presents a novel solution that seamlessly integrates two technologies, mobile cloud computing and microwave power transfer (MPT), to enable computation in passive low-complexity devices such as sensors and wearable computing devices. Specifically, considering a single-user system, a base station (BS) either transfers power to or offloads computation from a mobile to the cloud; the mobile uses harvested energy to compute given data either locally or by offloading. A framework for energy efficient computing is proposed that comprises a set of policies for controlling CPU cycles for the mode of local computing, time division between MPT and offloading for the other mode of offloading, and mode selection. Given the CPU-cycle statistics information and channel state information (CSI), the policies aim at maximizing the probability of successfully computing given data, called computing probability, under the energy harvesting and deadline constraints. The policy optimization is translated into the equivalent problems of minimizing the mobile energy consumption for local computing and maximizing the mobile energy savings for offloading which are solved using convex optimization theory. The structures of the resultant policies are characterized in closed form. Furthermore, given non-causal CSI, the said analytical framework is further developed to support computation load allocation over multiple channel realizations, which further increases the computing probability. Last, simulation demonstrates the feasibility of wirelessly powered mobile cloud computing and the gain of its optimal control.", "title": "" }, { "docid": "69a6cfb649c3ccb22f7a4467f24520f3", "text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. 
This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.", "title": "" }, { "docid": "b0dad27f9ea4a4f53d9691073d925a81", "text": "We present the synthesis and the optical properties of a new type of two-dimensional heterostructure: core/crown CdSe/CdS nanoplatelets. They consist of CdSe nanoplatelets that are extended laterally with CdS. Both the CdSe core and the CdS crown dimensions can be controlled. Their thickness is controlled at the monolayer level. These novel nanoplatelet-based heterostructures have spectroscopic properties that can be similar to nanoplatelets or closer to quantum dots, depending on the CdSe core lateral size.", "title": "" }, { "docid": "702a5f672f0ec2e84149c72a7de559df", "text": "Vladimir L. Averbukh, Mihkail O. Bakhterev, Aleksandr Yu. Baydalin Institute of Mathematics and Mechanics, Inst. of Math. and Mech., Inst. of Math. and Mech. Dmitriy Yu. Gorbashevskiy, Damir R. Ismagilov, Alexey Yu. Kazantsev Inst. of Math. and Mech., Ural State Univercity, Inst. of Math. and Mech. Polina V. Nebogatikova, Anna V. Popova, Pavel A. Vasev Ural State Univercity, Ural State Univercity, Inst. of Math. and Mech. Russia (all)", "title": "" }, { "docid": "5d879bdbf7667fa8ad19c3bb86219880", "text": "The cellular concept applied in mobile communication systems enables significant increase of overall system capacity, but requires careful radio network planning and dimensioning. Wireless and mobile network operators typically rely on various commercial radio network planning and dimensioning tools, which incorporate different radio signal propagation models. In this paper we present the use of open-source Geographical Resources Analysis Support System (GRASS) for the calculation of radio signal coverage. We developed GRASS modules for radio coverage prediction for a number of different radio channel models, with antenna radiation patterns given in the standard MSI format. The results are stored in a data base (e.g. MySQL, PostgreSQL) for further processing and in a simplified form as a bit-map file for displaying in GRASS. The accuracy of prediction was confirmed by comparison with results obtained by a dedicated professional prediction tool as well as with measurement results. Key-Words: network planning tool, open-source, GRASS GIS, path loss, raster, clutter, radio signal coverage", "title": "" }, { "docid": "ea17334df645dabb38ff27ec1530566a", "text": "Recently, deep neural networks (DNN) have been incorporated into i-vector-based speaker recognition systems, where they have significantly improved state-of-the-art performance. In these systems, a DNN is used to collect sufficient statistics for i-vector extraction. In this study, the DNN is a recently developed time delay deep neural network (TDNN) that has achieved promising results in LVCSR tasks. We believe that the TDNN-based system achieves the best reported results on SRE10 and it obtains a 50% relative improvement over our GMM baseline in terms of equal error rate (EER). For some applications, the computational cost of a DNN is high. Therefore, we also investigate a lightweight alternative in which a supervised GMM is derived from the TDNN posteriors. This method maintains the speed of the traditional unsupervised-GMM, but achieves a 20% relative improvement in EER.", "title": "" } ]
scidocsrr
fa8fbf27801040f23a6c09007250ea1e
A framework for MDE of IoT-based manufacturing cyber-physical systems
[ { "docid": "4239d27174101a90374b48acf0a88325", "text": "Recent advances in manufacturing industry, and notably in the Industry 4.0 context, promote the development of CPSs and consequently give rise to a number of issues to be solved. The present paper describes the context of the extension of mechatronic systems to cyber-physical ones, firstly by highlighting their similarities and differences, and then by underlining the current needs for CPSs in the manufacturing sector. Then, the paper presents the main research issues related to CPS design and, in particular, the needs for an integrated and multi-scale designing approach to prevent conflicts across different design domains early enough within the CPS development process. To this aim, the impact of the extension from mechatronic to Cyber-Physical Systems on their design is examined through a set of existing related modelling techniques. The multi-scalability requirement of these techniques is firstly described, concerning external/internal interactions, process control, behaviour simulation, representation of topological relationships and interoperability through a multi-agent platform, and then applied to the case study of a tablets manufacturing process. Finally, the proposed holistic description of such a multi-scale manufacturing CPS allows to outline the main characteristics of a modelling-simulation platform, able notably to bridge the semantic gaps existing between the different designing disciplines and specialised domains. © 2016 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "2a2c48288a07523827080ffbd62c74b4", "text": "With a strikingly simple architecture and the ability to learn meaningful word embeddings efficiently from texts containing billions of words, word2vec remains one of the most popular neural language models used today. However, as only a single embedding is learned for every word in the vocabulary, the model fails to optimally represent words with multiple meanings and, additionally, it is not possible to create embeddings for new (out-of-vocabulary) words on the spot. Based on an intuitive interpretation of the continuous bag-of-words (CBOW) word2vec model’s negative sampling training objective in terms of predicting context based similarities, we motivate an extension of the model we call context encoders (ConEc). By multiplying the matrix of trained word2vec embeddings with a word’s average context vector, out-ofvocabulary (OOV) embeddings and representations for words with multiple meanings can be created based on the words’ local contexts. The benefits of this approach are illustrated by using these word embeddings as features in the CoNLL 2003 named entity recognition (NER) task.", "title": "" }, { "docid": "24b769f8ed2688bbe7621ad1eb317b8a", "text": "This paper presents a camera that samples the 4D light field on its sensor in a single photographic exposure. This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera. Each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray. By re-sorting the measured rays of light to where they would have terminated in slightly different, synthetic cameras, we can compute sharp photographs focused at different depths. We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise. Especially in the macrophotography regime, we demonstrate that we can also compute synthetic photographs from a range of different viewpoints. These capabilities argue for a different strategy in designing photographic imaging systems. To the photographer, the plenoptic camera operates exactly like an ordinary hand-held camera. We have used our prototype to take hundreds of light field photographs, and we present examples of portraits, high-speed action and macro close-ups.", "title": "" }, { "docid": "90b21e8edcb993f472fe516dff22ae84", "text": "Urticaria is a kind of skin rash that sometimes caused by allergic reactions. Acute viral infection, stress, pressure, exercise and sunlight are some other causes of urticaria. However, chronic urticaria and angioedema could be either idiopathic or caused by autoimmune reaction. They last more than six weeks and could even persist for a very long time. It is thought that the level of C-reactive protein CRP increases and the level of Erythrocyte sedimentation rate (ESR) decreases in patients with chronic urticaria. Thirty four patients with chronic or recurrent urticaria were selected for the treatment with wet cupping. Six of them, because of having a history of recent infection/cold urticaria, were eliminated and the remaining 28 were chosen for this study. 
ESR and CRP were measured in these patients aged 21-59, comprising 12 females and 16 males, ranged from 5-24 mm/h for ESR with a median 11 mm/h and 3.3-31.2 mg/L with a median of 11.95 mg/L for CRP before and after phlebotomy (250-450mL) which was performed as a control for wet cupping therapy. Three weeks after phlebotomy, wet cupping was performed on the back of these patients between two shoulders and the levels of ESR and CRP were measured again three weeks after wet cupping. The changes were observed in the level of CRP and ESR after phlebotomy being negligible. However, the level of CRP with a median 11.95 before wet cupping dramatically dropped to 1.1 after wet cupping. The level ESR also with a median 11 before wet cupping rose to 15.5 after wet cupping therapy. The clear correlation between the urticaria/angioedema and the rise of CRP was observed as was anticipated. No recurrence has been observed on twenty five of these patients and three of them are still recovering from the lesions.", "title": "" }, { "docid": "350495750961199ae746ee17eb0ba819", "text": "Gynecologic emergencies are relatively common and include ectopic pregnancies, adnexal torsion, tubo-ovarian abscess, hemorrhagic ovarian cysts, gynecologic hemorrhage, and vulvovaginal trauma. The purpose of this article is to provide a concise review of these emergencies, focusing on the evaluation and treatment options for the patient. In many cases, other causes of an acute abdomen are in the differential diagnosis. Understanding the tenets of diagnosis helps the surgeon narrow the etiology and guide appropriate treatment.", "title": "" }, { "docid": "40e9c1a6bef4a8b0c2681b09afc528c9", "text": "360-Degree panoramic cameras have been widely used in the field of computer vision and virtual reality recently. The use of fisheye lens to actualize a panoramic camera has become the industry trend. Fisheye lens has large distortion, and fisheye images have to be unwarped and blended to get 360-degree panoramic images, which has become two difficulties in fisheye lens practice. In this paper, a set of automatic 360-degree panoramic image generation algorithm which can be easily realized is proposed to solve these difficulties. The result shows that this software method can achieve high quality and low cost.", "title": "" }, { "docid": "9456d36b9f8be6543010fd7e9865f63b", "text": "Time stamped texts or text sequences are ubiquitous in real life, such as news reports. Tracking the topic evolution of these texts has been an issue of considerable interest. Recent work has developed methods of tracking topic shifting over long time scales. However, most of these researches focus on a large corpus. Also, they only focus on the text itself and no attempt have been made to explore the temporal distribution of the corpus, which could provide meaningful and comprehensive clues for topic tracking. In this paper, we formally address this problem and put forward a novel method based on the topic model. We investigate the temporal distribution of news reports of a specific event and try to integrate this information with a topic model to enhance the performance of topic model. By focusing on a specific news event, we try to reveal more details about the event, such as, how many stages are there in the event, what aspect does each stage focus on, etc.", "title": "" }, { "docid": "bf2065f6c04f566110667a22a9d1b663", "text": "Casticin, a polymethoxyflavone occurring in natural plants, has been shown to have anticancer activities. 
In the present study, we aims to investigate the anti-skin cancer activity of casticin on melanoma cells in vitro and the antitumor effect of casticin on human melanoma xenografts in nu/nu mice in vivo. A flow cytometric assay was performed to detect expression of viable cells, cell cycles, reactive oxygen species production, levels of [Formula: see text] and caspase activity. A Western blotting assay and confocal laser microscope examination were performed to detect expression of protein levels. In the in vitro studies, we found that casticin induced morphological cell changes and DNA condensation and damage, decreased the total viable cells, and induced G2/M phase arrest. Casticin promoted reactive oxygen species (ROS) production, decreased the level of [Formula: see text], and promoted caspase-3 activities in A375.S2 cells. The induced G2/M phase arrest indicated by the Western blotting assay showed that casticin promoted the expression of p53, p21 and CHK-1 proteins and inhibited the protein levels of Cdc25c, CDK-1, Cyclin A and B. The casticin-induced apoptosis indicated that casticin promoted pro-apoptotic proteins but inhibited anti-apoptotic proteins. These findings also were confirmed by the fact that casticin promoted the release of AIF and Endo G from mitochondria to cytosol. An electrophoretic mobility shift assay (EMSA) assay showed that casticin inhibited the NF-[Formula: see text]B binding DNA and that these effects were time-dependent. In the in vivo studies, results from immuno-deficient nu/nu mice bearing the A375.S2 tumor xenograft indicated that casticin significantly suppressed tumor growth based on tumor size and weight decreases. Early G2/M arrest and mitochondria-dependent signaling contributed to the apoptotic A375.S2 cell demise induced by casticin. In in vivo experiments, A375.S2 also efficaciously suppressed tumor volume in a xenotransplantation model. Therefore, casticin might be a potential therapeutic agent for the treatment of skin cancer in the future.", "title": "" }, { "docid": "ade3c1fefc8dc408211929402d180ce6", "text": "Noise is currently the second most common complaint amongst restaurant-goers, behind poor service. In fact, over the last decade or two, many restaurants have become so loud that some critics now regularly report on the noise levels alongside the quality of the food. In this review, I first highlight the growing problem of noise in restaurants and bars and look at the possible causes. I then critically evaluate the laboratory-based research that has examined the effect of loud background noise on taste perception. I distinguish between the effect of noise on the taste, aroma/flavour, and textural properties of food and drink. Taken together, the evidence now clearly demonstrates that both background noise and loud music can impair our ability to taste food and drink. It would appear that noise selectively impairs the ability to detect tastes such as sweet and sour while leaving certain other taste and flavour experiences relatively unaffected. Possible neuroscientific explanations for such effects are outlined, and directions for future research highlighted. Finally, having identified the growing problem with noise in restaurants, I end by looking at some of the possible solutions and touch on the concept of silent dining.", "title": "" }, { "docid": "698abf5788520934edfbee8f74154825", "text": "A near-regular texture deviates geometrically and photometrically from a regular congruent tiling. 
Although near-regular textures are ubiquitous in the man-made and natural world, they present computational challenges for state of the art texture analysis and synthesis algorithms. Using regular tiling as our anchor point, and with user-assisted lattice extraction, we can explicitly model the deformation of a near-regular texture with respect to geometry, lighting and color. We treat a deformation field both as a function that acts on a texture and as a texture that is acted upon, and develop a multi-modal framework where each deformation field is subject to analysis, synthesis and manipulation. Using this formalization, we are able to construct simple parametric models to faithfully synthesize the appearance of a near-regular texture and purposefully control its regularity.", "title": "" }, { "docid": "71a262b1c91c89f379527b271e45e86e", "text": "Geospatial object detection from high spatial resolution (HSR) remote sensing imagery is a heated and challenging problem in the field of automatic image interpretation. Despite convolutional neural networks (CNNs) having facilitated the development in this domain, the computation efficiency under real-time application and the accurate positioning on relatively small objects in HSR images are two noticeable obstacles which have largely restricted the performance of detection methods. To tackle the above issues, we first introduce semantic segmentation-aware CNN features to activate the detection feature maps from the lowest level layer. In conjunction with this segmentation branch, another module which consists of several global activation blocks is proposed to enrich the semantic information of feature maps from higher level layers. Then, these two parts are integrated and deployed into the original single shot detection framework. Finally, we use the modified multi-scale feature maps with enriched semantics and multi-task training strategy to achieve end-to-end detection with high efficiency. Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset have demonstrated the superiority of the presented method.", "title": "" }, { "docid": "74e40c5cb4e980149906495da850d376", "text": "Universal schema predicts the types of entities and relations in a knowledge base (KB) by jointly embedding the union of all available schema types—not only types from multiple structured databases (such as Freebase or Wikipedia infoboxes), but also types expressed as textual patterns from raw text. This prediction is typically modeled as a matrix completion problem, with one type per column, and either one or two entities per row (in the case of entity types or binary relation types, respectively). Factorizing this sparsely observed matrix yields a learned vector embedding for each row and each column. In this paper we explore the problem of making predictions for entities or entity-pairs unseen at training time (and hence without a pre-learned row embedding). We propose an approach having no per-row parameters at all; rather we produce a row vector on the fly using a learned aggregation function of the vectors of the observed columns for that row. We experiment with various aggregation functions, including neural network attention models. Our approach can be understood as a natural language database, in that questions about KB entities are answered by attending to textual or database evidence. 
In experiments predicting both relations and entity types, we demonstrate that despite having an order of magnitude fewer parameters than traditional universal schema, we can match the accuracy of the traditional model, and more importantly, we can now make predictions about unseen rows with nearly the same accuracy as rows available at training time.", "title": "" }, { "docid": "d487d83c805114cb36be664e48e3a588", "text": "Although motor imagery is widely used for motor learning in rehabilitation and sports training, the underlying mechanisms are still poorly understood. Based on fMRI data sets acquired with very high temporal resolution (300 ms) under motor execution and imagery conditions, we utilized Dynamic Causal Modeling (DCM) to determine effective connectivity measures between supplementary motor area (SMA) and primary motor cortex (M1). A set of 28 models was tested in a Bayesian framework and the by-far best-performing model revealed a strong suppressive influence of the motor imagery condition on the forward connection between SMA and M1. Our results clearly indicate that the lack of activation in M1 during motor imagery is caused by suppression from the SMA. These results highlight the importance of the SMA not only for the preparation and execution of intended movements, but also for suppressing movements that are represented in the motor system but not to be performed.", "title": "" }, { "docid": "e59b203f3b104553a84603240ea467eb", "text": "Experimental art deployed in the Augmented Reality (AR) medium is contributing to a reconfiguration of traditional perceptions of interface, audience participation, and perceptual experience. Artists, critical engineers, and programmers, have developed AR in an experimental topology that diverges from both industrial and commercial uses of the medium. In a general technical sense, AR is considered as primarily an information overlay, a datafied window that situates virtual information in the physical world. In contradistinction, AR as experimental art practice activates critical inquiry, collective participation, and multimodal perception. As an emergent hybrid form that challenges and extends already established 'fine art' categories, augmented reality art deployed on Portable Media Devices (PMD’s) such as tablets & smartphones fundamentally eschews models found in the conventional 'art world.' It should not, however, be considered as inscribing a new 'model:' rather, this paper posits that the unique hybrids advanced by mobile augmented reality art–– also known as AR(t)–– are closely related to the notion of the 'machinic assemblage' ( Deleuze & Guattari 1987), where a deep capacity to re-assemble marks each new artevent. This paper develops a new formulation, the 'software assemblage,’ to explore some of the unique mixed reality situations that AR(t) has set in motion.", "title": "" }, { "docid": "a9fbabe9366f1c416a065849ccf499eb", "text": "BACKGROUND\nAtherosclerotic cardiovascular disease and malnutrition are widely recognized as leading causes of the increased morbidity and mortality observed in uremic patients. C-reactive protein (CRP), an acute-phase protein, is a predictor of cardiovascular mortality in nonrenal patient populations. In chronic renal failure (CRF), the prevalence of an acute-phase response has been associated with an increased mortality.\n\n\nMETHODS\nOne hundred and nine predialysis patients (age 52 +/- 1 years) with terminal CRF (glomerular filtration rate 7 +/- 1 ml/min) were studied. 
By using noninvasive B-mode ultrasonography, the cross-sectional carotid intima-media area was calculated, and the presence or absence of carotid plaques was determined. Nutritional status was assessed by subjective global assessment (SGA), dual-energy x-ray absorptiometry (DXA), serum albumin, serum creatinine, serum urea, and 24-hour urine urea excretion. The presence of an inflammatory reaction was assessed by CRP, fibrinogen (N = 46), and tumor necrosis factor-alpha (TNF-alpha; N = 87). Lipid parameters, including Lp(a) and apo(a)-isoforms, as well as markers of oxidative stress (autoantibodies against oxidized low-density lipoprotein and vitamin E), were also determined.\n\n\nRESULTS\nCompared with healthy controls, CRF patients had an increased mean carotid intima-media area (18.3 +/- 0.6 vs. 13.2 +/- 0.7 mm2, P < 0.0001) and a higher prevalence of carotid plaques (72 vs. 32%, P = 0.001). The prevalence of malnutrition (SGA 2 to 4) was 44%, and 32% of all patients had an acute-phase response (CRP > or = 10 mg/liter). Malnourished patients had higher CRP levels (23 +/- 3 vs. 13 +/- 2 mg/liter, P < 0.01), elevated calculated intima-media area (20.2 +/- 0.8 vs. 16.9 +/- 0.7 mm2, P < 0.01) and a higher prevalence of carotid plaques (90 vs. 60%, P < 0.0001) compared with well-nourished patients. During stepwise multivariate analysis adjusting for age and gender, vitamin E (P < 0.05) and CRP (P < 0.05) remained associated with an increased intima-media area. The presence of carotid plaques was significantly associated with age (P < 0.001), log oxidized low-density lipoprotein (oxLDL; P < 0.01), and small apo(a) isoform size (P < 0.05) in a multivariate logistic regression model.\n\n\nCONCLUSION\nThese results indicate that the rapidly developing atherosclerosis in advanced CRF appears to be caused by a synergism of different mechanisms, such as malnutrition, inflammation, oxidative stress, and genetic components. Apart from classic risk factors, low vitamin E levels and elevated CRP levels are associated with an increased intima-media area, whereas small molecular weight apo(a) isoforms and increased levels of oxLDL are associated with the presence of carotid plaques.", "title": "" }, { "docid": "5339bd241f053214673ead767476077d", "text": "----------------------------------------------------------------------ABSTRACT----------------------------------------------------------This paper is a general survey of all the security issues existing in the Internet of Things (IoT) along with an analysis of the privacy issues that an end-user may face as a consequence of the spread of IoT. The majority of the survey is focused on the security loopholes arising out of the information exchange technologies used in Internet of Things. No countermeasure to the security drawbacks has been analyzed in the paper.", "title": "" }, { "docid": "9653346c41cab4e22c9987586bb155c1", "text": "The focus of the great majority of climate change impact studies is on changes in mean climate. In terms of climate model output, these changes are more robust than changes in climate variability. By concentrating on changes in climate means, the full impacts of climate change on biological and human systems are probably being seriously underestimated. Here, we briefly review the possible impacts of changes in climate variability and the frequency of extreme events on biological and food systems, with a focus on the developing world. 
We present new analysis that tentatively links increases in climate variability with increasing food insecurity in the future. We consider the ways in which people deal with climate variability and extremes and how they may adapt in the future. Key knowledge and data gaps are highlighted. These include the timing and interactions of different climatic stresses on plant growth and development, particularly at higher temperatures, and the impacts on crops, livestock and farming systems of changes in climate variability and extreme events on pest-weed-disease complexes. We highlight the need to reframe research questions in such a way that they can provide decision makers throughout the food system with actionable answers, and the need for investment in climate and environmental monitoring. Improved understanding of the full range of impacts of climate change on biological and food systems is a critical step in being able to address effectively the effects of climate variability and extreme events on human vulnerability and food security, particularly in agriculturally based developing countries facing the challenge of having to feed rapidly growing populations in the coming decades.", "title": "" }, { "docid": "d7b689bb8794897134f9024ff98d561b", "text": "The Gomoku board game is a longstanding challenge for artificial intelligence research. With the development of deep learning, move prediction can help to promote the intelligence of board game agents, as proven in AlphaGo. Following this idea, we train deep convolutional neural networks by supervised learning to predict the moves made by expert Gomoku players from the RenjuNet dataset. We put forward a number of deep neural networks with different architectures and different hyperparameters to solve this problem. With only the board state as the input, the proposed deep convolutional neural networks are able to recognize some special features of Gomoku and select the most likely next move. The final neural network achieves a move prediction accuracy of about 42% on the RenjuNet dataset, which reaches the level of expert Gomoku players. In addition, it is promising to generate strong, human-level Gomoku agents with the move prediction as a guide.", "title": "" }, { "docid": "b48d9e46a22fce04dac6949b08a7673c", "text": "Khadtare Y, Chaudhari A, Waghmare P, Prashant S. The LANAP Protocol (laser-assisted new attachment procedure): A Minimally Invasive Bladeless Procedure. Review Article. J Periodontol Med Clin Pract 2014;01:264-271. Journal of Periodontal Medicine & Clinical Practice (JPMCP).", "title": "" }, { "docid": "53598a996f31476b32871cf99f6b84f0", "text": "The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track included three tasks involving: (1A) identifying relationships between citing documents and the referred document, (1B) classifying the discourse facets, and (2) generating the abstractive summary. The dataset comprised 30 annotated sets of citing and reference papers from the open access research papers in the CL domain.
This overview paper describes the participation and the official results of the second CL-SciSumm Shared Task, organized as a part of the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL 2016), held in New Jersey, USA, in June 2016. The annotated dataset used for this shared task and the scripts used for evaluation can be accessed and used by the community at: https://github.com/WING-NUS/scisumm-corpus.", "title": "" }, { "docid": "a45dbfbea6ff33d920781c07dac0442b", "text": "Context-aware intelligent systems employ implicit inputs, and make decisions based on complex rules and machine learning models that are rarely clear to users. Such lack of system intelligibility can lead to loss of user trust, satisfaction and acceptance of these systems. However, automatically providing explanations about a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which the effectiveness of different types of explanations was examined. Participants were shown examples of a system's operation along with various automatically generated explanations, and then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a certain way resulted in lower understanding yet adequate performance. We discuss implications for the use of our findings in real-world context-aware applications.", "title": "" } ]
scidocsrr
1259304b535b4b46049841cf8d700463
Generalized Grounding Graphs: A Probabilistic Framework for Understanding Grounded Commands
[ { "docid": "da69ac86355c5c514f7e86a48320dcb3", "text": "Current approaches to semantic parsing, the task of converting text to a formal meaning representation, rely on annotated training data mapping sentences to logical forms. Providing this supervision is a major bottleneck in scaling semantic parsers. This paper presents a new learning paradigm aimed at alleviating the supervision burden. We develop two novel learning algorithms capable of predicting complex structures which only rely on a binary feedback signal based on the context of an external world. In addition we reformulate the semantic parsing problem to reduce the dependency of the model on syntactic patterns, thus allowing our parser to scale better using less supervision. Our results surprisingly show that without using any annotated meaning representations learning with a weak feedback signal is capable of producing a parser that is competitive with fully supervised parsers.", "title": "" } ]
[ { "docid": "d8d17aa5e709ebd4dda676eadb531ef3", "text": "The combination of global and partial features has been an essential solution to improve discriminative performances in person re-identification (Re-ID) tasks. Previous part-based methods mainly focus on locating regions with specific pre-defined semantics to learn local representations, which increases learning difficulty but not efficient or robust to scenarios with large variances. In this paper, we propose an end-to-end feature learning strategy integrating discriminative information with various granularities. We carefully design the Multiple Granularity Network (MGN), a multi-branch deep network architecture consisting of one branch for global feature representations and two branches for local feature representations. Instead of learning on semantic regions, we uniformly partition the images into several stripes, and vary the number of parts in different local branches to obtain local feature representations with multiple granularities. Comprehensive experiments implemented on the mainstream evaluation datasets including Market-1501, DukeMTMC-reid and CUHK03 indicate that our method robustly achieves state-of-the-art performances and outperforms any existing approaches by a large margin. For example, on Market-1501 dataset in single query mode, we obtain a top result of Rank-1/mAP=96.6%/94.2% with this method after re-ranking.", "title": "" }, { "docid": "b252aea38a537a22ab34fdf44e9443d2", "text": "The objective of this study is to describe the case of a patient presenting advanced epidermoid carcinoma of the penis associated to myiasis. A 41-year-old patient presenting with a necrotic lesion of the distal third of the penis infested with myiasis was attended in the emergency room of our hospital and was submitted to an urgent penectomy. This is the first case of penile cancer associated to myiasis described in the literature. This case reinforces the need for educative campaigns to reduce the incidence of this disease in developing countries.", "title": "" }, { "docid": "57c705e710f99accab3d9242fddc5ac8", "text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.", "title": "" }, { "docid": "72e97d0f9f4ca19e4654e69b93729d71", "text": "In this paper, we propose a novel cross-space affinity learning algorithm over different spaces with heterogeneous structures. Unlike most of affinity learning algorithms on the homogeneous space, we construct a cross-space tensor model to learn the affinity measures on heterogeneous spaces subject to a set of order constraints from the training pool. We further enhance the model with a factorization form which greatly reduces the number of parameters of the model with a controlled complexity. Moreover, from the practical perspective, we show the proposed factorized cross-space tensor model can be efficiently optimized by a series of simple quadratic optimization problems in an iterative manner. 
The proposed cross-space affinity learning algorithm can be applied to many real-world problems, which involve multiple heterogeneous data objects defined over different spaces. In this paper, we apply it into the recommendation system to measure the affinity between users and the product items, where a higher affinity means a higher rating of the user on the product. For an empirical evaluation, a widely used benchmark movie recommendation data set-MovieLens-is used to compare the proposed algorithm with other state-of-the-art recommendation algorithms and we show that very competitive results can be obtained.", "title": "" }, { "docid": "e902cdc8d2e06d7dd325f734b0a289b6", "text": "Vaccinium arctostaphylos is a traditional medicinal plant in Iran used for the treatment of diabetes mellitus. In our search for antidiabetic compounds from natural sources, we found that the extract obtained from V. arctostaphylos berries showed an inhibitory effect on pancreatic alpha-amylase in vitro [IC50 = 1.91 (1.89-1.94) mg/mL]. The activity-guided purification of the extract led to the isolation of malvidin-3-O-beta-glucoside as an a-amylase inhibitor. The compound demonstrated a dose-dependent enzyme inihibitory activity [IC50 = 0.329 (0.316-0.342) mM].", "title": "" }, { "docid": "1446cad5eead8ab66cee4c3a11caac07", "text": "Patients turn to Online Health Communities not only for information on specific conditions but also for emotional support. Previous research has indicated that the progression of emotional status can be studied through the linguistic patterns of an individual’s posts. We analyze a realworld dataset from the Mental Health section of healthboards.com. Estimated from the word usages in their posts, we find that the emotional progress across patients vary widely. We study the problem of predicting a patient’s emotional status in the future from her past posts and we propose a Recurrent Neural Network (RNN) based architecture to address it. We find that the future emotional status can be predicted with reasonable accuracy given her historical posts and participation features. Our evaluation results demonstrate the efficacy of our proposed architecture, by outperforming state-of-the-art approaches with over 0.13 reduction in Mean Absolute Error.", "title": "" }, { "docid": "5179662c841302180848dc566a114f10", "text": "Hyperspectral image (HSI) unmixing has attracted increasing research interests in recent decades. The major difficulty of it lies in that the endmembers and the associated abundances need to be separated from highly mixed observation data with few a priori information. Recently, sparsity-constrained nonnegative matrix factorization (NMF) algorithms have been proved effective for hyperspectral unmixing (HU) since they can sufficiently utilize the sparsity property of HSIs. In order to improve the performance of NMF-based unmixing approaches, spectral and spatial constrains have been added into the unmixing model, but spectral-spatial joint structure is required to be more accurately estimated. To exploit the property that similar pixels within a small spatial neighborhood have higher possibility to share similar abundances, hypergraph structure is employed to capture the similarity relationship among the spatial nearby pixels. In the construction of a hypergraph, each pixel is taken as a vertex of the hypergraph, and each vertex with its k nearest spatial neighboring pixels form a hyperedge. 
Using the hypergraph, the pixels with similar abundances can be accurately found, which enables the unmixing algorithm to obtain promising results. Experiments on synthetic data and real HSIs are conducted to investigate the performance of the proposed algorithm. The superiority of the proposed algorithm is demonstrated by comparing it with some state-of-the-art methods.", "title": "" }, { "docid": "16e2f269c21eaf2bf856bb0996ab8135", "text": "In this paper, we present a cryptographic technique for an authenticated, end-to-end verifiable and secret ballot election. Voters should receive assurance that their vote is cast as intended, recorded as cast and tallied as recorded. The election system as a whole should ensure that voter coercion is unlikely, even when voters are willing to be influenced. Currently, almost all verifiable e-voting systems require trusted authorities to perform the tallying process. An exception is the DRE-i and DRE-ip system. The DRE-ip system removes the requirement of tallying authorities by encrypting ballot in such a way that the election tally can be publicly verified without decrypting cast ballots. However, the DRE-ip system necessitates a secure bulletin board (BB) for storing the encrypted ballot as without it the integrity of the system may be lost and the result can be compromised without detection during the audit phase. In this paper, we have modified the DRE-ip system so that if any recorded ballot is tampered by an adversary before the tallying phase, it will be detected during the tallying phase. In addition, we have described a method using zero knowledge based public blockchain to store these ballots so that it remains tamper proof. To the best of our knowledge, it is the first end-toend verifiable Direct-recording electronic (DRE) based e-voting system using blockchain. In our case, we assume that the bulletin board is insecure and an adversary has read and write access to the bulletin board. We have also added a secure biometric with government provided identity card based authentication mechanism for voter authentication. The proposed system is able to encrypt ballot in such a way that the election tally can be publicly verified without decrypting cast ballots maintaining end-to-end verifiability and without requiring the secure bulletin board.", "title": "" }, { "docid": "bfb0de9970cf1970f98c4fa78c2ec4d7", "text": "The problem of matching between binaries is important for software copyright enforcement as well as for identifying disclosed vulnerabilities in software. We present a search engine prototype called Rendezvous which enables indexing and searching for code in binary form. Rendezvous identifies binary code using a statistical model comprising instruction mnemonics, control flow sub-graphs and data constants which are simple to extract from a disassembly, yet normalising with respect to different compilers and optimisations. Experiments show that Rendezvous achieves F2 measures of 86.7% and 83.0% on the GNU C library compiled with different compiler optimisations and the GNU coreutils suite compiled with gcc and clang respectively. These two code bases together comprise more than one million lines of code. Rendezvous will bring significant changes to the way patch management and copyright enforcement is currently performed.", "title": "" }, { "docid": "5b04ad90f2699075a9afabe89748f2b5", "text": "The explosive growth of micro-blogging sites such as Twitter has enabled folks to share their personal up-to-dates. 
Compared to conventional blog sites, through the short length of messages, micro-blogging sites help users easily express their experiences, thoughts and feelings and share them instantly and globally. In addition, mobile devices based micro-blogging applications are ensuring the usefulness in a variety of our daily activities without spatial or temporal restriction. Especially, the most significant characteristics chiefly possible in such mobile micro-blogging is on the fact that the cutting-edge smartphones can utilize location sensing information that make it clear to analyze where the published messages are made almost in real time. In the respect of the diversity and the quantity of crowds writing the micro-blogs, we are sure that the micro-blogging sites can be a very important social media platform where a lot of valuable knowledge such as geographic social phenomena can be extracted. In this paper, we endeavor to find geographic social patterns from user movement histories made by mass mobile micro-bloggers. We particularly propose fundamental models based on aggregation and dispersion about movements of micro-bloggers in geographic regions. We also performed experiments to discover geographic characteristics from the micro-blog data actually gathered from Twitter.", "title": "" }, { "docid": "1fba9ed825604e8afde8459a3d3dc0c0", "text": "Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a \"learning via translation\" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of an Siamese network and a CycleGAN. Through domain adaptation experiment, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.", "title": "" }, { "docid": "51b91ef1b46d6696a0e99eb8649d6447", "text": "A solid-state drive (SSD) gains fast I/O speed and is becoming an ideal replacement for traditional rotating storage. However, its speed and responsiveness heavily depend on internal fragmentation. With a high degree of fragmentation, an SSD may experience sharp performance degradation. Hence, minimizing fragmentation in the SSD is an effective way to sustain its high performance. In this paper, we propose an innovative file data placement strategy for Rocks DB, a widely used embedded NoSQL database. The proposed strategy steers data to a write unit exposed by an SSD according to predicted data lifetime. By placing data with similar lifetime in the same write unit, fragmentation in the SSD is controlled at the time of data write. 
We evaluate our proposed strategy using the Yahoo! Cloud Serving Benchmark. Our experimental results demonstrate that the proposed strategy improves RocksDB performance significantly: the throughput can be increased by up to 41%, 99.99%ile latency reduced by 59%, and SSD lifetime extended by up to 18%.", "title": "" }, { "docid": "f6266e5c4adb4fa24cc353dccccaf6db", "text": "Clustering plays an important role in many large-scale data analyses, providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widely used topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With the help of these visualization modules, we can interactively refine the clustering results in various ways.", "title": "" }, { "docid": "634ded02136fef04ec8c64a819522e7b", "text": "Maintaining appropriate levels of food intake and developing regularity in eating habits is crucial to weight loss and the preservation of a healthy lifestyle. Moreover, maintaining awareness of one's own eating habits is an important step towards portion control and, ultimately, weight loss. Though many solutions have been proposed in the area of physical activity monitoring, few works attempt to monitor an individual's food intake by means of a noninvasive, wearable platform. In this paper, we introduce a novel nutrition-intake monitoring system based around a wearable, mobile, wireless-enabled necklace featuring an embedded piezoelectric sensor. We also propose a framework capable of estimating volume of meals, identifying long-term trends in eating habits, and providing classification between solid foods and liquids with an F-Measure of 85% and 86% respectively. The data is presented to the user in the form of a mobile application.", "title": "" }, { "docid": "b045350bfb820634046bff907419d1bf", "text": "Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for joint 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamless way.
The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.", "title": "" }, { "docid": "d3c5a15b14ab5f4a44223e7e571e412e", "text": "− Instead of minimizing the observed training error, Support Vector Regression (SVR) attempts to minimize the generalization error bound so as to achieve generalized performance. The idea of SVR is based on the computation of a linear regression function in a high dimensional feature space where the input data are mapped via a nonlinear function. SVR has been applied in various fields – time series and financial (noisy and risky) prediction, approximation of complex engineering analyses, convex quadratic programming and choices of loss functions, etc. In this paper, an attempt has been made to review the existing theory, methods, recent developments and scopes of SVR.", "title": "" }, { "docid": "f6df414f8f61dbdab32be2f05d921cb8", "text": "The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas, we perform this task at ease given very few examples for learning. It has been proposed that the quick grasp of concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an objects. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.", "title": "" }, { "docid": "6b130d9179bbf640644423e67289b29b", "text": "Although both reaching and grasping require transporting the hand to the object location, only grasping also requires processing of object shape, size and orientation to preshape the hand. Behavioural and neuropsychological evidence suggests that the object processing required for grasping relies on different neural substrates from those mediating object recognition. Specifically, whereas object recognition is believed to rely on structures in the ventral (occipitotemporal) stream, object grasping appears to rely on structures in the dorsal (occipitoparietal) stream. We used functional magnetic resonance imaging (fMRI) to determine whether grasping (compared to reaching) produced activation in dorsal areas, ventral areas, or both. We found greater activity for grasping than reaching in several regions, including anterior intraparietal (AIP) cortex. We also performed a standard object perception localizer (comparing intact vs. scrambled 2D object images) in the same subjects to identify the lateral occipital complex (LOC), a ventral stream area believed to play a critical role in object recognition. Although LOC was activated by the objects presented on both grasping and reaching trials, there was no greater activity for grasping compared to reaching. These results suggest that dorsal areas, including AIP, but not ventral areas such as LOC, play a fundamental role in computing object properties during grasping.", "title": "" } ]
scidocsrr
c383ee145450ee6b49686190400db965
An Optimal Algorithm for Approximate Nearest Neighbor Searching Fixed Dimensions
[ { "docid": "662b1ec9e2481df760c19567ce635739", "text": "Picture yourself as a fashion designer needing images of fabrics with a particular mixture of colors, a museum cataloger looking for artifacts of a particular shape and textured pattern, or a movie producer needing a video clip of a red car-like object moving from right to left with the camera zooming. How do you find these images? Even though today's technology enables us to acquire, manipulate, transmit, and store vast on-line image and video collections, the search methodologies used to find pictorial information are still limited due to difficult research problems (see "Semantic versus nonsemantic" sidebar). Typically, these methodologies depend on file IDs, keywords, or text associated with the images. And, although powerful, they", "title": "" } ]
[ { "docid": "6121b76159c55cc8dbaebd5213c874b1", "text": "In this paper, a 320 × 240 pixel, 80 frame/s CMOS image sensor with a low power dual correlated double sampling (CDS) scheme is presented. A novel 8-bit hold-and-go counter in each column is proposed to obtain 10-bit resolution. Furthermore, dual CDS and a configurable counter scheme are also discussed to realize efficient power reduction. With these techniques, the digital counter consumes at least 43% and at most 61% less power compared with the column-counters type, and the frame rate is approximately 40% faster than the double memory type due to a partial pipeline structure without additional memories. The prototype sensor was fabricated in a Samsung 0.13 μm 1P4M CMOS process and used a 4T APS with a pixel pitch of 2.25 μm. The measured column fixed pattern noise (FPN)", "title": "" }, { "docid": "e602cb626418ff3dbb38fd171bfd359e", "text": "File carving is an important technique for digital forensics investigation and for simple data recovery. By using a database of headers and footers (essentially, strings of bytes at predictable offsets) for specific file types, file carvers can retrieve files from raw disk images, regardless of the type of filesystem on the disk image. Perhaps more importantly, file carving is possible even if the filesystem metadata has been destroyed. This paper presents some requirements for high performance file carving, derived during design and implementation of Scalpel, a new open source file carving application. Scalpel runs on machines with only modest resources and performs carving operations very rapidly, outperforming most, perhaps all, of the current generation of carving tools. The results of a number of experiments are presented to support this assertion.", "title": "" }, { "docid": "d90bc873f154cd66823c3a1d7cb1c8bf", "text": "In this paper we used two new features i.e. T-wave integral and total integral as extracted feature from one cycle of normal and patient ECG signals to detection and localization of myocardial infarction (MI) in left ventricle of heart. In our previous work we used some features of body surface potential map data for this aim. But we know the standard ECG is more popular, so we focused our detection and localization of MI on standard ECG. We use the T-wave integral because this feature is important impression of T-wave in MI. The second feature in this research is total integral of one ECG cycle, because we believe that the MI affects the morphology of the ECG signal which leads to total integral changes. We used some pattern recognition method such as Artificial Neural Network (ANN) to detect and localize the MI, because this method has very good accuracy for classification of normal signal and abnormal signal. We used one type of Radial Basis Function (RBF) that called Probabilistic Neural Network (PNN) because of its nonlinearity property, and used other classifier such as k-Nearest Neighbors (KNN), Multilayer Perceptron (MLP) and Naive Bayes Classification. We used PhysioNet database as our training and test data. We reached over 76% for accuracy in test data for localization and over 94% for detection of MI. Main advantages of our method are simplicity and its good accuracy. Also we can improve the accuracy of classification by adding more features in this method. 
A simple method based on using only two features which were extracted from standard ECG is presented and has good accuracy in MI localization.", "title": "" }, { "docid": "4e11d69f17272fdeaf03be2db4b7e982", "text": "We present a method for spotting words in the wild, i.e., in real images taken in unconstrained environments. Text found in the wild has a surprising range of difficulty. At one end of the spectrum, Optical Character Recognition (OCR) applied to scanned pages of well formatted printed text is one of the most successful applications of computer vision to date. At the other extreme lie visual CAPTCHAs – text that is constructed explicitly to fool computer vision algorithms. Both tasks involve recognizing text, yet one is nearly solved while the other remains extremely challenging. In this work, we argue that the appearance of words in the wild spans this range of difficulties and propose a new word recognition approach based on state-of-the-art methods from generic object recognition, in which we consider object categories to be the words themselves. We compare performance of leading OCR engines – one open source and one proprietary – with our new approach on the ICDAR Robust Reading data set and a new word spotting data set we introduce in this paper: the Street View Text data set. We show improvements of up to 16% on the data sets, demonstrating the feasibility of a new approach to a seemingly old problem.", "title": "" }, { "docid": "47a13e29a0b87133b2da7ba3b6e82ff1", "text": "Current multimodal deep learning approaches rarely explicitly exploit the dependencies inherent in multiple labels, which are crucial for multimodal multi-label classification. In this paper, we propose a multimodal deep learning approach for multi-label classification. Specifically, we introduce deep networks for feature representation learning and construct classifiers with the objective function which is constrained with dependencies among both labels and modals. We further propose effective training algorithm to learn deep networks and classifiers jointly. Thus, we explicitly leverage the relations among labels and modals to facilitate multimodal multi-label classification. Experiments of multi-label classification and cross-modal retrieval on the Pascal VOC dataset and the La-belMe dataset demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "59d106a74ff2d0c11797b45b8fd7212a", "text": "We explore the risks to security and privacy in IoT networks by setting up an inexpensive home automation network and performing a set of experiments intended to study attacks and defenses. We focus on privacy preservation in home automation networks but our insights can extend to other IoT applications. Privacy preservation is fundamental to achieving the promise of IoT, Industrial Internet and M2M. We look at both simple cryptographic techniques and information manipulation to protect a user against an adversary inside the IoT network or an adversary that has compromised remote servers. We show how user data can be masked or selectively leaked and manipulated. We provide a blueprint for inexpensive study of IoT security and privacy using COTS products and services.", "title": "" }, { "docid": "e8814bda5323d76f9912843e1f9d0b3e", "text": "This paper develops a new principled framework for exploiting time-sensitive information to improve the truth discovery accuracy in social sensing applications. 
This work is motivated by the emergence of social sensing as a new paradigm of collecting observations about the physical environment from humans or devices on their behalf. These observations maybe true or false, and hence are viewed as binary claims. A fundamental problem in social sensing applications lies in ascertaining the correctness of claims and the reliability of data sources. We refer to this problem as truth discovery. Time is a critical dimension that needs to be carefully exploited in the truth discovery solutions. In this paper, we develop a new time-sensitive truth discovery scheme that explicitly incorporates the source responsiveness and the claim lifespan into a rigorous analytical framework. The new truth discovery scheme solves a maximum likelihood estimation problem to determine both the claim correctness and the source reliability. We compare our time-sensitive scheme with the state-of-the-art baselines through an extensive simulation study and a real world case study. The evaluation results showed that our new scheme outperforms all compared baselines and significantly improves the truth discovery accuracy in social sensing applications.", "title": "" }, { "docid": "1e5956b0d9d053cd20aad8b53730c969", "text": "The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as \"the fog\". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.", "title": "" }, { "docid": "26ec7042ef44ca5620cf2deaa5247c5b", "text": "In today's days, due to increase in number of vehicles the probability of accidents are also increasing. The user should be aware of the road circumstances for safety purpose. Several methods requires installing dedicated hardware in vehicle which are expensive. so we have designed a Smart-phone based method which uses a Accelerometer and GPS sensors to analyze the road conditions. The designed system is called as Bumps Detection System(BDS) which uses Accelerometer for pothole detection and GPS for plotting the location of potholes on Google Map. Drivers will be informed in advance about count of potholes on road. we have assumed some threshold values on z-axis(Experimentally Derived)while designing the system. To justify these threshold values we have used a machine learning approach. The k means clustering algorithm is applied on the training data to build a model. Random forest classifier is used to evaluate this model on the test data for better prediction.", "title": "" }, { "docid": "15fb8b92428ce4f2c06d926fd323e9ef", "text": "Convolutional Neural Network (CNN) is one of the most effective neural network model for many classification tasks, such as voice recognition, computer vision and biological information processing. Unfortunately, Computation of CNN is both memory-intensive and computation-intensive, which brings a huge challenge to the design of the hardware accelerators. 
A large number of hardware accelerators for CNN inference are designed by the industry and the academia. Most of the engines are based on 32-bit floating point matrix multiplication, where the data precision is over-provisioned for the inference job and the hardware cost is too high. In this paper, an 8-bit fixed-point LeNet inference engine (Laius) is designed and implemented on FPGA. In order to reduce the consumption of FPGA resources, we propose a methodology to find the optimal bit-length for weight and bias in LeNet, which results in using 8-bit fixed point for most of the computation and 16-bit fixed point for the remaining computation. The PE (Processing Element) design is proposed. Pipelining and a PE tiling technique are used to improve the performance of the inference engine. By theoretical analysis, we came to the conclusion that the DSP resource in the FPGA is the most critical resource, so it should be carefully used during the design process. We implement the inference engine on a Xilinx 485t FPGA. Experimental results show that the designed LeNet inference engine can achieve 44.9 Gops throughput with 8-bit fixed-point operation after pipelining. Moreover, with only 1% loss of accuracy, the 8-bit fixed-point engine achieves reductions of 31.43% in latency, 87.01% in LUT consumption, 66.50% in BRAM consumption, 65.11% in DSP consumption and 47.95% in power compared to a 32-bit fixed-point inference engine with the same structure.", "title": "" }, { "docid": "e89acdeb493d156390851a2a57231baf", "text": "Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents' messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.", "title": "" }, { "docid": "cc56477e8cf8f15018a28bca352380ef", "text": "This paper presents the application of three different types of neural networks to the 2-D pattern recognition on the basis of its shape. They include the multilayer perceptron (MLP), Kohonen self-organizing network and hybrid structure composed of the self-organizing layer and the MLP subnetwork connected in cascade. The recognition is based on the features extracted from the Fourier and wavelet transformations of the data, describing the shape of the pattern. Application of different neural network structures associated with different preprocessing of the data results in different accuracy of recognition and classification. The numerical experiments performed for the recognition of simulated shapes of the airplanes have shown the superiority of the wavelet preprocessing associated with the self-organizing neural network structure. The integration of the individual classifiers based on the weighted summation of the signals from the neural networks has been proposed and checked in numerical experiments. ©
2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "a5274779804272ffc76edfa9b47ef805", "text": "World energy demand is expected to increase due to the expanding urbanization, better living standards and increasing population. At a time when society is becoming increasingly aware of the declining reserves of fossil fuels beside the environmental concerns, it has become apparent that biodiesel is destined to make a substantial contribution to the future energy demands of the domestic and industrial economies. There are different potential feedstocks for biodiesel production. Non-edible vegetable oils which are known as the second generation feedstocks can be considered as promising substitutions for traditional edible food crops for the production of biodiesel. The use of non-edible plant oils is very significant because of the tremendous demand for edible oils as food source. Moreover, edible oils’ feedstock costs are far expensive to be used as fuel. Therefore, production of biodiesel from non-edible oils is an effective way to overcome all the associated problems with edible oils. However, the potential of converting non-edible oil into biodiesel must be well examined. This is because physical and chemical properties of biodiesel produced from any feedstock must comply with the limits of ASTM and DIN EN specifications for biodiesel fuels. This paper introduces non-edible vegetable oils to be used as biodiesel feedstocks. Several aspects related to these feedstocks have been reviewed from various recent publications. These aspects include overview of non-edible oil resources, advantages of non-edible oils, problems in exploitation of non-edible oils, fatty acid composition profiles (FAC) of various non-edible oils, oil extraction techniques, technologies of biodiesel production from non-edible oils, biodiesel standards and characterization, properties and characteristic of non-edible biodiesel and engine performance and emission production. As a conclusion, it has been found that there is a huge chance to produce biodiesel from non-edible oil sources and therefore it can boost the future production of biodiesel. & 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b9a4a9fbc299684fc735e6e91211aecd", "text": "This paper proposes a new isolated word recognition technique based on a combination of instantaneous and dynamic features of the speech spectrum. This technique is shown to be highly effective in speaker-independent speech recognition. Spoken utterances are represented by time sequences of cepstrum coefficients and energy. Regression coefficients for these time functions are extracted for every frame over an approximately 50 ms period. Time functions of regression coefficients extracted for cepstrum and energy are combined with time functions of the original cepstrum coefficients, and used with a staggered array DP matching algorithm to compare multiple templates and input speech. Speaker-independent isolated word recognition experiments using a vocabulary of 100 Japanese city names indicate that a recognition error rate of 2.4 percent can be obtained with this method. Using only the original cepstrum coefficients the error rate is 6.2 percent. D", "title": "" }, { "docid": "cfa58ab168beb2d52fe6c2c47488e93a", "text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. 
Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.", "title": "" }, { "docid": "25c2212a923038644fa93bba0dd9d7b8", "text": "Qualitative research aims to address questions concerned with developing an understanding of the meaning and experience dimensions of humans' lives and social worlds. Central to good qualitative research is whether the research participants' subjective meanings, actions and social contexts, as understood by them, are illuminated. This paper aims to provide beginning researchers, and those unfamiliar with qualitative research, with an orientation to the principles that inform the evaluation of the design, conduct, findings and interpretation of qualitative research. It orients the reader to two philosophical perspectives, the interpretive and critical research paradigms, which underpin both the qualitative research methodologies most often used in mental health research, and how qualitative research is evaluated. Criteria for evaluating quality are interconnected with standards for ethics in qualitative research. They include principles for good practice in the conduct of qualitative research, and for trustworthiness in the interpretation of qualitative data. The paper reviews these criteria, and discusses how they may be used to evaluate qualitative research presented in research reports. These principles also offer some guidance about the conduct of sound qualitative research for the beginner qualitative researcher.", "title": "" }, { "docid": "6c9d84ced9dd23cdb7542a50f1459fef", "text": "This article outlines a framework for the analysis of economic integration and its relation to the asymmetries of economic and social development. Consciously breaking with state-centric forms of social science, it argues for a research agenda that is more adequate to the exigencies and consequences of globalisation than has traditionally been the case in 'development studies'. Drawing on earlier attempts to analyse the crossborder activities of firms, their spatial configurations and developmental consequences, the article moves beyond these by proposing the framework of the 'global production network' (GPN). It explores the conceptual elements involved in this framework in some detail and then turns to sketch a stylised example of a GPN. The article concludes with a brief indication of the benefits that could be delivered be research informed by GPN analysis.", "title": "" }, { "docid": "e3b92d76bb139d0601c85416e8afaca4", "text": "Conventional supervised object recognition methods have been investigated for many years. Despite their successes, there are still two suffering limitations: (1) various information of an object is represented by artificial features only derived from RGB images, (2) lots of manually labeled data is required by supervised learning. To address those limitations, we propose a new semi-supervised learning framework based on RGB and depth (RGB-D) images to improve object recognition. 
In particular, our framework has two modules: (1) RGB and depth images are represented by convolutional-recursive neural networks to construct high level features, respectively, (2) co-training is exploited to make full use of unlabeled RGB-D instances due to the existing two independent views. Experiments on the standard RGB-D object dataset demonstrate that our method can compete against with other state-of-the-art methods with only 20% labeled data.", "title": "" }, { "docid": "5215451bb41f28f1e568a4447a2945da", "text": "Impulsivity is the tendency to act prematurely without foresight. Behavioral and neurobiological analysis of this construct, with evidence from both animal and human studies, defines several dissociable forms depending on distinct cortico-striatal substrates. One form of impulsivity depends on the temporal discounting of reward, another on motor or response disinhibition. Impulsivity is commonly associated with addiction to drugs from different pharmacological classes, but its causal role in human addiction is unclear. We characterize in neurobehavioral and neurochemical terms a rodent model of impulsivity based on premature responding in an attentional task. Evidence is surveyed that high impulsivity on this task precedes the escalation subsequently of cocaine self-administration behavior, and also a tendency toward compulsive cocaine-seeking and to relapse. These results indicate that the vulnerability to stimulant addiction may depend on an impulsivity endophenotype. Implications of these findings for the etiology, development, and treatment of drug addiction are considered.", "title": "" }, { "docid": "e13d6cd043ea958e9731c99a83b6de18", "text": "In this article, an overview and an in-depth analysis of the most discussed 5G waveform candidates are presented. In addition to general requirements, the nature of each waveform is revealed including the motivation, the underlying methodology, and the associated advantages and disadvantages. Furthermore, these waveform candidates are categorized and compared both qualitatively and quantitatively. By doing all these, the study in this work offers not only design guidelines but also operational suggestions for the 5G waveform.", "title": "" } ]
scidocsrr
47cd43d14de67a0c9a6d0ce2f93d773a
Aérgia: exploiting packet latency slack in on-chip networks
[ { "docid": "b27d9ddc450ed71497d70ebb7f31d7a8", "text": "Cores in a chip-multiprocessor (CMP) system share multiple hardware resources in the memory subsystem. If resource sharing is unfair, some applications can be delayed significantly while others are unfairly prioritized. Previous research proposed separate fairness mechanisms in each individual resource. Such resource-based fairness mechanisms implemented independently in each resource can make contradictory decisions, leading to low fairness and loss of performance. Therefore, a coordinated mechanism that provides fairness in the entire shared memory system is desirable.\n This paper proposes a new approach that provides fairness in the entire shared memory system, thereby eliminating the need for and complexity of developing fairness mechanisms for each individual resource. Our technique, Fairness via Source Throttling (FST), estimates the unfairness in the entire shared memory system. If the estimated unfairness is above a threshold set by system software, FST throttles down cores causing unfairness by limiting the number of requests they can inject into the system and the frequency at which they do. As such, our source-based fairness control ensures fairness decisions are made in tandem in the entire memory system. FST also enforces thread priorities/weights, and enables system software to enforce different fairness objectives and fairness-performance tradeoffs in the memory system.\n Our evaluations show that FST provides the best system fairness and performance compared to four systems with no fairness control and with state-of-the-art fairness mechanisms implemented in both shared caches and memory controllers.", "title": "" } ]
[ { "docid": "a2b07331572f120230bcc2d95bf93fa5", "text": "This paper presents a robust concatenated coding scheme for OFDM with 64 QAM over AWGN channel. At the forward error correction unit, our proposed concatenated coding scheme employs standard form of BCH code as outer code and LDPC code as inner code. Varying from the code rates of BCH codes, we can find the minimum requirement of signal to noise ratio in the proposed concatenated coding scheme. In addition, our proposed scheme can yield better performance than that using BCH (7200, 7032) code in ETSI EN 302 775. Finally, we apply the H.264 source coding via our platform for demonstrations.", "title": "" }, { "docid": "f8adbe748056a503396bb5b17da84f07", "text": "Unsupervised word embeddings provide rich linguistic and conceptual information about words. However, they may provide weak information about domain specific semantic relations for certain tasks such as semantic parsing of natural language queries, where such information about words can be valuable. To encode the prior knowledge about the semantic word relations, we present new method as follows: we extend the neural network based lexical word embedding objective function (Mikolov et al. 2013) by incorporating the information about relationship between entities that we extract from knowledge bases. Our model can jointly learn lexical word representations from free text enriched by the relational word embeddings from relational data (e.g., Freebase) for each type of entity relations. We empirically show on the task of semantic tagging of natural language queries that our enriched embeddings can provide information about not only short-range syntactic dependencies but also long-range semantic dependencies between words. Using the enriched embeddings, we obtain an average of 2% improvement in F-score compared to the previous baselines.", "title": "" }, { "docid": "a5a1dd08d612db28770175cc578dd946", "text": "A novel soft-robotic gripper design is presented, with three soft bending fingers and one passively adaptive palm. Each soft finger comprises two ellipse-profiled pneumatic chambers. Combined with the adaptive palm and the surface patterned feature, the soft gripper could achieve 40-N grasping force in practice, 10 times the self-weight, at a very low actuation pressure below 100 kPa. With novel soft finger design, the gripper could pick up small objects, as well as conform to large convex-shape objects with reliable contact. The fabrication process was presented in detail, involving commercial-grade three-dimensional printing and molding of silicone rubber. The fabricated actuators and gripper were tested on a dedicated platform, showing the gripper could reliably grasp objects of various shapes and sizes, even with external disturbances.", "title": "" }, { "docid": "db87b17e0fd3310fd462c725a5462e6a", "text": "We present Selections, a new cryptographic voting protocol that is end-to-end verifiable and suitable for Internet voting. After a one-time in-person registration, voters can cast ballots in an arbitrary number of elections. We say a system provides over-the-shoulder coercionresistance if a voter can undetectably avoid complying with an adversary that is present during the vote casting process. Our system is the first in the literature to offer this property without the voter having to anticipate coercion and precompute values. Instead, a voter can employ a panic password. We prove that Selections is coercion-resistant against a non-adaptive adversary. 
1 Introductory Remarks From a security perspective, the use of electronic voting machines in elections around the world continues to be concerning. In principle, many security issues can be allayed with cryptography. While cryptographic voting has not seen wide deployment, refined systems like Prêt à Voter [11,28] and Scantegrity II [9] are representative of what is theoretically possible, and have even seen some use in governmental elections [7]. Today, a share of the skepticism over electronic elections is being apportioned to Internet voting.1 Many nation-states are considering, piloting or using Internet voting in elections. In addition to the challenges of verifiability and ballot secrecy present in any voting system, Internet voting adds two additional constraints: • Untrusted platforms: voters should be able to reliably cast secret ballots, even when their devices may leak information or do not function correctly. • Unsupervised voting: coercers or vote buyers should not be able to exert undue influence over voters despite the open environment of Internet voting. As with electronic voting, cryptography can assist in addressing these issues. The study of cryptographic Internet voting is not as mature. Most of the literature concentrates on only one of the two problems (see related work in Section 1.2). In this paper, we are concerned with the unsupervised voting problem. Informally, a system that solves it is said to be coercion-resistant. Full version available: http://eprint.iacr.org/2011/166 1 One noted cryptographer, Ronald Rivest, infamously opined that “best practices for Internet voting are like best practices for drunk driving” [25]. G. Danezis (Ed.): FC 2011, LNCS 7035, pp. 47–61, 2012. c © Springer-Verlag Berlin Heidelberg 2012 48 J. Clark and U. Hengartner", "title": "" }, { "docid": "a8fabde6ef54212ea0a8d47727ecd388", "text": "An alternative circuit analysis technique is used to study networks with nonsinusoidal sources and linear loads. In contrast to the technique developed by Steinmetz, this method is supported by geometric algebra instead of the algebra of complex numbers, uses multivectors in place of phasors and is performed in the GN domain instead of the frequency domain. The advantages of this method over the present technique involve: determining the flow of current and power quantities in the circuit, validating the results using the principle of conservation of energy, discerning and revealing other forms of reactive power generation, and the ability to design compensators with great flexibility. The power equation is composed of the active power and the CN -power representing the nonactive power. All the CN-power terms are sorted into reactive power terms due to phase shift, reactive power terms due to harmonic interactions and degrading power terms which determine the new quantity called degrading power. This decomposition shows that estimating these quantities is intricate. It also displays the power equation's functionality for power factor improvement. The geometric addition of power quantities is not pre-established but results from applying the established norm and yields the new quantity called net apparent power.", "title": "" }, { "docid": "06f575b18d1421472a178c555d31987b", "text": "In recent, growth of higher education has increased rapidly. Many new institutions, colleges and universities are being established by both the private and government sectors for the growth of education and welfare of the students. 
Each institution aims at producing higher and exemplary education rates by employing various teaching and grooming methods. But still there are cases of unemployment that exists among the medium and low risk students. This paper describes the use of data mining techniques to improve the efficiency of academic performance in the educational institutions. Various data mining techniques such as decision tree, association rule, nearest neighbors, neural networks, genetic algorithms, exploratory factor analysis and stepwise regression can be applied to the higher education process, which in turn helps to improve student’s performance. This type of approach gives high confidence to students in their studies. This method helps to identify the students who need special advising or counseling by the teacher which gives high quality of education. Keywords-component; Data Mining; KDD; EDM; Association Rule", "title": "" }, { "docid": "52a5f4c15c1992602b8fe21270582cc6", "text": "This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On realworld sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.", "title": "" }, { "docid": "058a4f93fb5c24c0c9967fca277ee178", "text": "We report on the SUM project which applies automatic summarisation techniques to the legal domain. We describe our methodology whereby sentences from the text are classified according to their rhetorical role in order that particular types of sentence can be extracted to form a summary. We describe some experiments with judgments of the House of Lords: we have performed automatic linguistic annotation of a small sample set and then hand-annotated the sentences in the set in order to explore the relationship between linguistic features and argumentative roles. We use state-of-the-art NLP techniques to perform the linguistic annotation using XML-based tools and a combination of rule-based and statistical methods. We focus here on the predictive capacity of tense and aspect features for a classifier.", "title": "" }, { "docid": "9c717907ec6af9a4edebae84e71ef3f1", "text": "We study a model of fairness in secure computation in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty. We then show how the Bitcoin network can be used to achieve the above notion of fairness in the two-party as well as the multiparty setting (with a dishonest majority). In particular, we propose new ideal functionalities and protocols for fair secure computation and fair lottery in this model. 
One of our main contributions is the definition of an ideal primitive, which we call F CR (CR stands for “claim-or-refund”), that formalizes and abstracts the exact properties we require from the Bitcoin network to achieve our goals. Naturally, this abstraction allows us to design fair protocols in a hybrid model in which parties have access to the F CR functionality, and is otherwise independent of the Bitcoin ecosystem. We also show an efficient realization of F CR that requires only two Bitcoin transactions to be made on the network. Our constructions also enjoy high efficiency. In a multiparty setting, our protocols only require a constant number of calls to F CR per party on top of a standard multiparty secure computation protocol. Our fair multiparty lottery protocol improves over previous solutions which required a quadratic number of Bitcoin transactions.", "title": "" }, { "docid": "c0546dabfcd377af78ae65a6e0a6a255", "text": "A hard real-time system is usually subject to stringent reliability and timing constraints since failure to produce correct results in a timely manner may lead to a disaster. One way to avoid missing deadlines is to trade the quality of computation results for timeliness, and software fault-tolerance is often achieved with the use of redundant programs. A deadline mechanism which combines these two methods is proposed to provide software faulttolerance in hard real-time periodic task systems. Specifically, we consider the problem of scheduling a set of realtime periodic tasks each of which has two versions:primary and alternate. The primary version contains more functions (thus more complex) and produces good quality results but its correctness is more difficult to verify because of its high level of complexity and resource usage. By contrast, the alternate version contains only the minimum required functions (thus simpler) and produces less precise but acceptable results, and its correctness is easy to verify. We propose a scheduling algorithm which (i) guarantees either the primary or alternate version of each critical task to be completed in time and (ii) attempts to complete as many primaries as possible. Our basic algorithm uses a fixed priority-driven preemptive scheduling scheme to pre-allocate time intervals to the alternates, and at run-time, attempts to execute primaries first. An alternate will be executed only (1) if its primary fails due to lack of time or manifestation of bugs, or (2) when the latest time to start execution of the alternate without missing the corresponding task deadline is reached. This algorithm is shown to be effective and easy to implement. This algorithm is enhanced further to prevent early failures in executing primaries from triggering failures in the subsequent job executions, thus improving efficiency of processor usage.", "title": "" }, { "docid": "60fe0b363310d7407a705e3c1037aa15", "text": "AIMS\nThe aim was to investigate the biosorption of chromium, nickel and iron from metallurgical effluents, produced by a steel foundry, using a strain of Aspergillus terreus immobilized in polyurethane foam.\n\n\nMETHODS AND RESULTS\nA. terreus UFMG-F01 was immobilized in polyurethane foam and subjected to biosorption tests with metallurgical effluents. 
Maximal metal uptake values of 164.5 mg g(-1) iron, 96.5 mg g(-1) chromium and 19.6 mg g(-1) nickel were attained in a culture medium containing 100% of effluent stream supplemented with 1% of glucose, after 6 d of incubation.\n\n\nCONCLUSIONS\nMicrobial populations in metal-polluted environments include fungi that have adapted to otherwise toxic concentrations of heavy metals and have become metal resistant. In this work, a strain of A. terreus was successfully used as a metal biosorbent for the treatment of metallurgical effluents.\n\n\nSIGNIFICANCE AND IMPACT OF THE STUDY\nA. terreus UFMG-F01 was shown to have good biosorption properties with respect to heavy metals. The low cost and simplicity of this technique make its use ideal for the treatment of effluents from steel foundries.", "title": "" }, { "docid": "717009da92a43c298afcb48f2ccfc879", "text": "It is known that the learning rate is the most important hyper-parameter to tune for training deep convolutional neural networks (i.e., a “guessing game”). This report describes a new method for setting the learning rate, named cyclical learning rates, that eliminates the need to experimentally find the best values and schedule for the learning rates. Instead of setting the learning rate to fixed values, this method lets the learning rate cyclically vary within reasonable boundary values. This report shows that training with cyclical learning rates achieves near optimal classification accuracy without tuning and often in many fewer iterations. This report also describes a simple way to estimate “reasonable bounds” by linearly increasing the learning rate in one training run of the network for only a few epochs. In addition, cyclical learning rates are demonstrated on training with the CIFAR-10 dataset and the AlexNet and GoogLeNet architectures on the ImageNet dataset. These methods are practical tools for everyone who trains convolutional neural networks.", "title": "" }, { "docid": "b04ae75e4f444b97976962a397ac413c", "text": "In this paper the new topology DC/DC Boost power converter-inverter-DC motor that allows bidirectional rotation of the motor shaft is presented. In this direction, the system mathematical model is developed considering its different operation modes. Afterwards, the model validation is performed via numerical simulations by using Matlab-Simulink.", "title": "" }, { "docid": "3bf10fc76e14b73fa04301f71bb6efa3", "text": "Illustrative parallel coordinates (IPC) is a suite of artistic rendering techniques for augmenting and improving parallel coordinate (PC) visualizations. IPC techniques can be used to convey a large amount of information about a multidimensional dataset in a small area of the screen through the following approaches: (a) edge-bundling through splines; (b) visualization of “branched ” clusters to reveal the distribution of the data; (c) opacity-based hints to show cluster density; (d) opacity and shading effects to illustrate local line density on the parallel axes; and (e) silhouettes, shadows and halos to help the eye distinguish between overlapping clusters. Thus, the primary goal of this work is to convey as much information as possible in a manner that is aesthetically pleasing and easy to understand for non-experts.", "title": "" }, { "docid": "37997245b1a6d10148819e56d978ba04", "text": "Aspect-based sentiment analysis summarizes what people like and dislike from reviews of products or services. 
In this paper, we adapt the first rank research at SemEval 2016 to improve the performance of aspect-based sentiment analysis for Indonesian restaurant reviews. We use six steps for aspect-based sentiment analysis i.e.: preprocess the reviews, aspect extraction, aspect categorization, sentiment classification, opinion structure generation, and rating calculation. We collect 992 sentences for experiment and 383 sentences for evaluation. We conduct experiment to find best feature combination for aspect extraction, aspect categorization, and sentiment classification. The aspect extraction, aspect categorization, and sentiment classification have F1-measure value of 0.793, 0.823, and 0.642 respectively.", "title": "" }, { "docid": "0cfa40d89a1d169d334067172167d750", "text": "Recent advances in RST discourse parsing have focused on two modeling paradigms: (a) high order parsers which jointly predict the tree structure of the discourse and the relations it encodes; or (b) lineartime parsers which are efficient but mostly based on local features. In this work, we propose a linear-time parser with a novel way of representing discourse constituents based on neural networks which takes into account global contextual information and is able to capture long-distance dependencies. Experimental results show that our parser obtains state-of-the art performance on benchmark datasets, while being efficient (with time complexity linear in the number of sentences in the document) and requiring minimal feature engineering.", "title": "" }, { "docid": "655f855531360c035f0dc59f70299302", "text": "Introduction 1Motivation, diagnosis of features inside CNNs: In recent years, real applications usually propose new demands for deep learning beyond the accuracy. The CNN needs to earn trust from people for safety issues, because a high accuracy on testing images cannot always ensure that the CNN encodes correct features. Instead, the CNN sometimes uses unreliable reasons for prediction. Therefore, this study aim to provide a generic tool to examine middle-layer features of a CNN to ensure the safety in critical applications. Unlike previous visualization (Zeiler and Fergus 2014) and diagnosis (Bau et al. 2017; Ribeiro, Singh, and Guestrin 2016) of CNN representations, we focus on the following two new issues, which are of special values in feature diagnosis. • Disentanglement of interpretable and uninterpretable feature information is necessary for a rigorous and trustworthy examination of CNN features. Each filter of a conv-layer usually encodes a mixture of various semantics and noises (see Fig. 1). As discussed in (Bau et al. 2017), filters in high conv-layers mainly represent “object parts”2, and “material” and “color” information in high layers is not salient enough for trustworthy analysis. In particular, part features are usually more localized and thus is more helpful in feature diagnosis. Therefore, in this paper, we propose to disentangle part features from another signals and noises. For example, we may quantitatively disentangle 90% information of CNN features as object parts and interpret the rest 10% as textures and noises. • Semantic explanations: Given an input image, we aim to use clear visual concepts (here, object parts) to interpret chaotic CNN features. 
In comparisons, network visualization and diagnosis mainly illustrate the appearance corresponding to a network output/filter, without physically", "title": "" }, { "docid": "b3898262f167c63ba2dbb3aacc259d5f", "text": "We propose two model-free visual object trackers for following targets using the low-cost quadrotor Parrot AR.Drone 2.0 at low altitudes. Our trackers employ correlation filters for short-term tracking and a redetection strategy based on tracking-learning-detection (TLD). We performed an extensive quantitative evaluation of our trackers and a wide variety of existing trackers on person pursuit sequences. The results show that our trackers outperform the existing trackers. In addition, we demonstrate the applicability of our proposed trackers in a series of flight experiments in unconstrained environments using human targets and an existing visual servoing controller.", "title": "" }, { "docid": "ecd8f70442aa40cd2088f4324fe0d247", "text": "Black box variational inference allows researchers to easily prototype and evaluate an array of models. Recent advances allow such algorithms to scale to high dimensions. However, a central question remains: How to specify an expressive variational distribution that maintains efficient computation? To address this, we develop hierarchical variational models (HVMs). HVMs augment a variational approximation with a prior on its parameters, which allows it to capture complex structure for both discrete and continuous latent variables. The algorithm we develop is black box, can be used for any HVM, and has the same computational efficiency as the original approximation. We study HVMs on a variety of deep discrete latent variable models. HVMs generalize other expressive variational distributions and maintains higher fidelity to the posterior.", "title": "" }, { "docid": "647e3aa7df6379ead9929decb58e0c3d", "text": "We present a fast inverse-graphics framework for instance-level 3D scene understanding. We train a deep convolutional network that learns to map image regions to the full 3D shape and pose of all object instances in the image. Our method produces a compact 3D representation of the scene, which can be readily used for applications like autonomous driving. Many traditional 2D vision outputs, like instance segmentations and depth-maps, can be obtained by simply rendering our output 3D scene model. We exploit class-specific shape priors by learning a low dimensional shape-space from collections of CAD models. We present novel representations of shape and pose, that strive towards better 3D equivariance and generalization. In order to exploit rich supervisory signals in the form of 2D annotations like segmentation, we propose a differentiable Render-and-Compare loss that allows 3D shape and pose to be learned with 2D supervision. We evaluate our method on the challenging real-world datasets of Pascal3D+ and KITTI, where we achieve state-of-the-art results.", "title": "" } ]
scidocsrr
fc7aa1c6a748bce4fff5428d22c6d79c
An improved NSGA-III procedure for evolutionary many-objective optimization
[ { "docid": "f0c149dd3cb05b694c1eae9986d465f4", "text": "Decomposition is a basic strategy in traditional multiobjective optimization. However, it has not yet been widely used in multiobjective evolutionary optimization. This paper proposes a multiobjective evolutionary algorithm based on decomposition (MOEA/D). It decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Each subproblem is optimized by only using information from its several neighboring subproblems, which makes MOEA/D have lower computational complexity at each generation than MOGLS and nondominated sorting genetic algorithm II (NSGA-II). Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0-1 knapsack problems and continuous multiobjective optimization problems. It has been shown that MOEA/D using objective normalization can deal with disparately-scaled objectives, and MOEA/D with an advanced decomposition method can generate a set of very evenly distributed solutions for 3-objective test instances. The ability of MOEA/D with small population, the scalability and sensitivity of MOEA/D have also been experimentally investigated in this paper.", "title": "" } ]
[ { "docid": "6c33b0ab7860b0691b46637eec31c4eb", "text": "Fascia iliaca block or femoral nerve block is used frequently in hip fracture patients because of their opioid-sparing effects and reduction in opioid-related adverse effects. A recent anatomical study on hip innervation led to the identification of relevant landmarks to target the hip articular branches of femoral nerve and accessory obturator nerve. Using this information, we developed a novel ultrasound-guided approach for blockade of these articular branches to the hip, the PENG (PEricapsular Nerve Group) block. In this report, we describe the technique and its application in 5 consecutive patients.", "title": "" }, { "docid": "1f333e1dbeec98d3733dd78dfd669933", "text": "Background and objectives: Food poisoning has been always a major concern in health system of every community and cream-filled products are one of the most widespread food poisoning causes in humans. In present study, we examined the preservative effect of the cinnamon oil in cream-filled cakes. Methods: Antimicrobial activity of Cinnamomum verum J. Presl (Cinnamon) bark essential oil was examined against five food-borne pathogens (Staphylococcus aureus, Escherichia coli, Candida albicans, Bacillus cereus and Salmonella typhimurium) to investigate its potential for use as a natural preservative in cream-filled baked goods. Chemical constituents of the oil were determined by gas chromatography/mass spectrometry. For evaluation of preservative sufficiency of the oil, pathogens were added to cream-filled cakes manually and 1 μL/mL of the essential oil was added to all samples except the blank. Results: Chemical constituents of the oil were determined by gas chromatography/mass spectrometry and twenty five components were identified where cinnamaldehyde (79.73%), linalool (4.08%), cinnamaldehyde para-methoxy (2.66%), eugenol (2.37%) and trans-caryophyllene (2.05%) were the major constituents. Cinnamon essential oil showed strong antimicrobial activity against selected pathogens in vitro and the minimum inhibitory concentration values against all tested microorganisms were determined as 0.5 μL/disc except for S. aureus for which, the oil was not effective in tested concentrations. After baking, no observable microorganism was observed in all susceptible microorganisms count in 72h stored samples. Conclusion: It was concluded that by analysing the sensory quality of the preserved food, cinnamon oil may be considered as a natural preservative in food industry, especially for cream-filled cakes and", "title": "" }, { "docid": "8d5c0786f7fdf2b08169cbd93daea134", "text": "This paper focuses on kinematic analysis and evaluation of wheelchair mounted robotic arms (WMRA). It addresses the kinematics of the WMRA with respect to its ability to reach common positions while performing activities of daily living (ADL). A procedure is developed for the kinematic analysis and evaluation of a WMRA. In an effort to evaluate two commercial WMRAs, the procedure for kinematic analysis is applied to each manipulator. Design recommendations and insights with regard to each device are obtained and used to design a new WMRA to overcome the limitations of these devices. 
This method benefits the researchers by providing a standardized procedure for kinematic analysis of WMRAs that is capable of evaluating independent designs.", "title": "" }, { "docid": "97bcae9e2ca08038a82c9c46b717cd4f", "text": "The Internet of Things (IoT) networks are vulnerable to various kinds of attacks, being the sinkhole attack one of the most destructive since it prevents communication among network devices. In general, existing solutions are not effective to provide protection and security against attacks sinkhole on IoT, and they also introduce high consumption of resources de memory, storage and processing. Further, they do not consider the impact of device mobility, which in essential in urban scenarios, like smart cities. This paper proposes an intrusion detection system, called INTI (Intrusion detection of SiNkhole attacks on 6LoWPAN for InterneT of ThIngs), to identify sinkhole attacks on the routing services in IoT. Moreover, INTI aims to mitigate adverse effects found in IDS that disturb its performance, like false positive and negative, as well as the high resource cost. The system combines watchdog, reputation and trust strategies for detection of attackers by analyzing the behavior of devices. Results show the INTI performance and its effectiveness in terms of attack detection rate, number of false positives and false negatives.", "title": "" }, { "docid": "bfb79421ca0ddfd5a584f009f8102a2c", "text": "In this paper, suppression of cross-polarized (XP) radiation of a circular microstrip patch antenna (CMPA) employing two new geometries of defected ground structures (DGSs), is experimentally investigated. One of the antennas employs a circular ring shaped defect in the ground plane, located bit away from the edge of the patch. This structure provides an improvement of XP level by 5 to 7 dB compared to an identical patch with normal ground plane. The second structure incorporates two arc-shaped DGSs in the H-plane of the patch. This configuration improves the XP radiation by about 7 to 12 dB over and above a normal CMPA. For demonstration of the concept, a set of prototypes have been examined at C-band. The experimental results have been presented.", "title": "" }, { "docid": "7644e24f667b221d2e5f47d71ce4e408", "text": "Considerable adverse side effects and cytotoxicity of highly potent drugs for healthy tissue require the development of novel drug delivery systems to improve pharmacokinetics and result in selective distribution of the loaded agent. The introduction of targeted liposomal formulations has provided potential solutions for improved drug delivery to cancer cells, penetrating delivery through blood-brain barrier and gene therapy. A large number of investigations have been developed over the past few decades to overcome pharmacokinetics and unfavorable side effects limitations. These improvements have enabled targeted liposome to meet criteria for successful and improved potent drug targeting. Promising in vitro and in vivo results for liposomal-directed delivery systems appear to be effective for vast variety of highly potent therapeutics. This review will focus on the past decade's potential use and study of highly potent drugs using targeted liposomes.", "title": "" }, { "docid": "2c251c8f1fcf15510a5c82de33daced3", "text": "BACKGROUND\nOverall diet quality measurements have been suggested as a useful tool to assess diet-disease relationships. Oxidative stress has been related to the development of obesity and other chronic diseases. 
Furthermore, antioxidant intake is being considered as protective against cell oxidative damage and related metabolic complications.\n\n\nOBJECTIVE\nTo evaluate potential associations between the dietary total antioxidant capacity of foods (TAC), the energy density of the diet, and other relevant nutritional quality indexes in healthy young adults.\n\n\nMETHODS\nSeveral anthropometric variables from 153 healthy participants (20.8 +/- 2.7 years) included in this study were measured. Dietary intake was assessed by a validated food-frequency questionnaire, which was also used to calculate the dietary TAC and for daily energy intake adjustment.\n\n\nRESULTS\nPositive significant associations were found between dietary TAC and Mediterranean energy density hypothesis-oriented dietary scores (Mediterranean Diet Score, Alternate Mediterranean Diet Score, Modified Mediterranean Diet Score), non-Mediterranean hypothesis-oriented dietary scores (Healthy Eating Index, Alternate Healthy Eating Index, Diet Quality Index-International, Diet Quality Index-Revised), and diversity of food intake indicators (Recommended Food Score, Quantitative Index for Dietary Diversity in terms of total energy intake). The Mediterranean Diet Quality Index and Diet Quality Index scores (a Mediterranean and a non-Mediterranean hypothesis-oriented dietary score, respectively), whose lower values refer to a higher diet quality, decreased with higher values of dietary TAC. Energy density was also inversely associated with dietary TAC.\n\n\nCONCLUSION\nThese data suggest that dietary TAC, as a measure of antioxidant intake, may also be a potential marker of diet quality in healthy subjects, providing a novel approach to assess the role of antioxidant intake on health promotion and diet-based therapies.", "title": "" }, { "docid": "5b134fae94a5cc3a2e1b7cc19c5d29e5", "text": "We explore making virtual desktops behave in a more physically realistic manner by adding physics simulation and using piling instead of filing as the fundamental organizational structure. Objects can be casually dragged and tossed around, influenced by physical characteristics such as friction and mass, much like we would manipulate lightweight objects in the real world. We present a prototype, called BumpTop, that coherently integrates a variety of interaction and visualization techniques optimized for pen input we have developed to support this new style of desktop organization.", "title": "" }, { "docid": "2e9d5a0f975a42e79a5c7625fc246502", "text": "e-Tourism is a tourist recommendation and planning application to assist users on the organization of a leisure and tourist agenda. First, a recommender system offers the user a list of the city places that are likely of interest to the user. This list takes into account the user demographic classification, the user likes in former trips and the preferences for the current visit. Second, a planning module schedules the list of recommended places according to their temporal characteristics as well as the user restrictions; that is the planning system determines how and when to perform the recommended activities. This is a very relevant feature that most recommender systems lack as it allows the user to have the list of recommended activities organized as an agenda, i.e. 
to have a totally executable plan.", "title": "" }, { "docid": "d9287f2a9cd9681d104beb326a06792d", "text": "Convolutional neural networks have been extremely successful in the image recognition domain because they ensure equivariance to translations. There have been many recent attempts to generalize this framework to other domains, including graphs and data lying on manifolds. In this paper we give a rigorous, theoretical treatment of convolution and equivariance in neural networks with respect to not just translations, but the action of any compact group. Our main result is to prove that (given some natural constraints) convolutional structure is not just a sufficient, but also a necessary condition for equivariance to the action of a compact group. Our exposition makes use of concepts from representation theory and noncommutative harmonic analysis and derives new generalized convolution formulae.", "title": "" }, { "docid": "a431c8c717fd4452a9654e59c6974031", "text": "While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/ .", "title": "" }, { "docid": "c070020d88fb77f768efa5f5ac2eb343", "text": "This paper provides a critical overview of the theoretical, analytical, and practical questions most prevalent in the study of the structural and the sociolinguistic dimensions of code-switching (CS). In doing so, it reviews a range of empirical studies from around the world. The paper first looks at the linguistic research on the structural features of CS focusing in particular on the code-switching versus borrowing distinction, and the syntactic constraints governing its operation. It then critically reviews sociological, anthropological, and linguistic perspectives dominating the sociolinguistic research on CS over the past three decades. Major empirical studies on the discourse functions of CS are discussed, noting the similarities and differences between socially motivated CS and style-shifting. Finally, directions for future research on CS are discussed, giving particular emphasis to the methodological issue of its applicability to the analysis of bilingual classroom interaction.", "title": "" }, { "docid": "c76d8583d805b61a8210c4e5f8854c80", "text": "BACKGROUND AND OBJECTIVES\nThe present study proposes an intelligent system for automatic categorization of Pap smear images to detect cervical dysplasia, which has been an open problem ongoing for last five decades.\n\n\nMETHODS\nThe classification technique is based on shape, texture and color features. 
It classifies the cervical dysplasia into two-level (normal and abnormal) and three-level (Negative for Intraepithelial Lesion or Malignancy, Low-grade Squamous Intraepithelial Lesion and High-grade Squamous Intraepithelial Lesion) classes reflecting the established Bethesda system of classification used for diagnosis of cancerous or precancerous lesion of cervix. The system is evaluated on two generated databases obtained from two diagnostic centers, one containing 1610 single cervical cells and the other 1320 complete smear level images. The main objective of this database generation is to categorize the images according to the Bethesda system of classification both of which require lots of training and expertise. The system is also trained and tested on the benchmark Herlev University database which is publicly available. In this contribution a new segmentation technique has also been proposed for extracting shape features. Ripplet Type I transform, Histogram first order statistics and Gray Level Co-occurrence Matrix have been used for color and texture features respectively. To improve classification results, ensemble method is used, which integrates the decision of three classifiers. Assessments are performed using 5 fold cross validation.\n\n\nRESULTS\nExtended experiments reveal that the proposed system can successfully classify Pap smear images performing significantly better when compared with other existing methods.\n\n\nCONCLUSION\nThis type of automated cancer classifier will be of particular help in early detection of cancer.", "title": "" }, { "docid": "54d61b3720be1a6a4aa236a51af72e0d", "text": "In 2008 Bitcoin was introduced as the first decentralized electronic cash system and it has seen widespread adoption since it became fully functional in 2009. This thesis describe the Bitcoin system, anonymity aspects for Bitcoin and how we can use cryptography to improve anonymity by a scheme called Zerocoin. The Bitcoin system will be described with focus on transactions and the blockchain where all transactions are recorded. We look more closely into anonymity in terms of address unlinkability and illustrate how the anonymity provided is insufficient by clustering addresses. Further we describe Zerocoin, a decentralized electronic cash scheme designed to cryptographically improve the anonymity guarantees in Bitcoin by breaking the link between individual Bitcoin transactions. We detail the construction of Zerocoin, provide security analysis and describe how it integrates into Bitcoin.", "title": "" }, { "docid": "bf6a5ff65a60da049c6024375e2effb6", "text": "This document updates RFC 4944, \"Transmission of IPv6 Packets over IEEE 802.15.4 Networks\". This document specifies an IPv6 header compression format for IPv6 packet delivery in Low Power Wireless Personal Area Networks (6LoWPANs). The compression format relies on shared context to allow compression of arbitrary prefixes. How the information is maintained in that shared context is out of scope. This document specifies compression of multicast addresses and a framework for compressing next headers. UDP header compression is specified within this framework. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. 
Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.", "title": "" }, { "docid": "e060548f90eb06f359b2d8cfcf713c29", "text": "Objective\nTo conduct a systematic review of deep learning models for electronic health record (EHR) data, and illustrate various deep learning architectures for analyzing different data sources and their target applications. We also highlight ongoing research and identify open challenges in building deep learning models of EHRs.\n\n\nDesign/method\nWe searched PubMed and Google Scholar for papers on deep learning studies using EHR data published between January 1, 2010, and January 31, 2018. We summarize them according to these axes: types of analytics tasks, types of deep learning model architectures, special challenges arising from health data and tasks and their potential solutions, as well as evaluation strategies.\n\n\nResults\nWe surveyed and analyzed multiple aspects of the 98 articles we found and identified the following analytics tasks: disease detection/classification, sequential prediction of clinical events, concept embedding, data augmentation, and EHR data privacy. We then studied how deep architectures were applied to these tasks. We also discussed some special challenges arising from modeling EHR data and reviewed a few popular approaches. Finally, we summarized how performance evaluations were conducted for each task.\n\n\nDiscussion\nDespite the early success in using deep learning for health analytics applications, there still exist a number of issues to be addressed. We discuss them in detail including data and label availability, the interpretability and transparency of the model, and ease of deployment.", "title": "" }, { "docid": "110f4c0cd7f0aa099dbadfa68ffcd385", "text": "In general, neural networks are not currently capable of learning tasks in a sequential fashion. When a novel, unrelated task is learnt by a neural network, it substantially forgets how to solve previously learnt tasks. One of the original solutions to this problem is pseudo-rehearsal, which involves learning the new task while rehearsing generated items representative of the previous task/s. This is very effective for simple tasks. However, pseudo-rehearsal has not yet been successfully applied to very complex tasks because in these tasks it is difficult to generate representative items. We accomplish pseudo-rehearsal by using a Generative Adversarial Network to generate items so that our deep network can learn to sequentially classify the CIFAR-10, SVHN and MNIST datasets. After training on all tasks, our network loses only 1.67% absolute accuracy on CIFAR-10 and gains 0.24% absolute accuracy on SVHN. Our model’s performance is a substantial improvement compared to the current state of the art solution.", "title": "" }, { "docid": "7440fce3a55281f183552c0398de7a0a", "text": "Linear logic is well known for its resource-awareness, which has inspired the design of several resource management mechanisms in programming language design. Its resource-awareness arises from the distinction between linear, single-use data and non-linear, reusable data. The latter is marked by the so-called exponential modality, which, from the categorical viewpoint, is a (monoidal) comonad. Monadic notions of computation are well-established mechanisms used to express effects in pure functional languages. 
Less well-established is the notion of comonadic computation. However, recent works have shown the usefulness of comonads to structure context dependent computations. In this work, we present a language `RPCF inspired by a generalized interpretation of the exponential modality. In `RPCF the exponential modality carries a label—an element of a semiring R—that provides additional information on how a program uses its context. This additional structure is used to express comonadic type analysis.", "title": "" }, { "docid": "7895810c92a80b6d5fd8b902241d66c9", "text": "This paper discusses a high-voltage pulse generator for producing corona plasma. The generator consists of three resonant charging circuits, a transmission line transformer, and a triggered spark-gap switch. Voltage pulses in the order of 30–100 kV with a rise time of 10–20 ns, a pulse duration of 100–200 ns, a pulse repetition rate of 1–900 pps, an energy per pulse of 0.5–12 J, and the average power of up to 10 kW have been achieved with total energy conversion efficiency of 80%–90%. Moreover, the system has been used in four industrial demonstrations on volatile organic compounds removal, odor emission control, and biogas conditioning.", "title": "" }, { "docid": "247eebd69a651f6f116f41fdf885ae39", "text": "RFID identification is a new technology that will become ubiquitous as RFID tags will be applied to every-day items in order to yield great productivity gains or “smart” applications for users. However, this pervasive use of RFID tags opens up the possibility for various attacks violating user privacy. In this work we present an RFID authentication protocol that enforces user privacy and protects against tag cloning. We designed our protocol with both tag-to-reader and reader-to-tag authentication in mind; unless both types of authentication are applied, any protocol can be shown to be prone to either cloning or privacy attacks. Our scheme is based on the use of a secret shared between tag and database that is refreshed to avoid tag tracing. However, this is done in such a way so that efficiency of identification is not sacrificed. Additionally, our protocol is very simple and it can be implemented easily with the use of standard cryptographic hash functions. In analyzing our protocol, we identify several attacks that can be applied to RFID protocols and we demonstrate the security of our scheme. Furthermore, we show how forward privacy is guaranteed; messages seen today will still be valid in the future, even after the tag has been compromised.", "title": "" } ]
scidocsrr
6f9ec032c1f6738a0b2ddbafed395734
Computer Vision-based Bangladeshi Sign Language Recognition System
[ { "docid": "f267b329f52628d3c52a8f618485ae95", "text": "We present an approach to continuous American Sign Language (ASL) recognition, which uses as input three-dimensional data of arm motions. We use computer vision methods for three-dimensional object shape and motion parameter extraction and an Ascension Technologies Flock of Birds interchangeably to obtain accurate three-dimensional movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for Hidden Markov Models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results.", "title": "" } ]
[ { "docid": "bdf8d4a8862aad3631f5def11b13b101", "text": "We examine the relationship between children's kindergarten attention skills and developmental patterns of classroom engagement throughout elementary school in disadvantaged urban neighbourhoods. Kindergarten measures include teacher ratings of classroom behavior, direct assessments of number knowledge and receptive vocabulary, and parent-reported family characteristics. From grades 1 through 6, teachers also rated children's classroom engagement. Semi-parametric mixture modeling generated three distinct trajectories of classroom engagement (n = 1369, 50% boys). Higher levels of kindergarten attention were proportionately associated with greater chances of belonging to better classroom engagement trajectories compared to the lowest classroom engagement trajectory. In fact, improvements in kindergarten attention reliably increased the likelihood of belonging to more productive classroom engagement trajectories throughout elementary school, above and beyond confounding child and family factors. Measuring the development of classroom productivity is pertinent because such dispositions represent precursors to mental health, task-orientation, and persistence in high school and workplace behavior in adulthood.", "title": "" }, { "docid": "4af2a221f182ede31a6a620f0441eba3", "text": "In this article, we present a semi-Lagrangian surface tracking method for use with fluid simulations. Our method maintains an explicit polygonal mesh that defines the surface, and an octree data structure that provides both a spatial index for the mesh and a means for efficiently approximating the signed distance to the surface. At each timestep, a new surface is constructed by extracting the zero set of an advected signed-distance function. Semi-Lagrangian backward path tracing is used to advect the signed-distance function. One of the primary advantages of this formulation is that it enables tracking of surface characteristics, such as color or texture coordinates, at negligible additional cost. We include several examples demonstrating that the method can be effectively used as part of a fluid simulation to animate complex and interesting fluid behaviors.", "title": "" }, { "docid": "23ae82051298ad7111cd5ebabcd5e075", "text": "There is considerable interest in revisiting LNT theory as the basis for the system of radiation protection in the US and worldwide. Arguing the scientific merits of policy options is not likely to be fruitful because the science is not robust enough to support one theory to the exclusion of others. Current science cannot determine the existence of a dose threshold, a key piece to resolving the matter scientifically. The nature of the scientific evidence is such that risk assessment at small effective doses (defined as <100 mSv) is highly uncertain, and several policy alternatives, including threshold and non-linear dose-response functions, are scientifically defensible. This paper argues for an alternative approach by looking at the LNT debate as a policy question and analyzes the problem from a social and economic perspective. In other words, risk assessment and a strictly scientific perspective are insufficiently broad enough to resolve the issue completely. 
A wider perspective encompassing social and economic impacts in a risk management context is necessary, but moving the debate to the policy and risk management arena necessarily marginalizes the role of scientists.", "title": "" }, { "docid": "a6def37312896cf470360b2c2282af69", "text": "The use of herbal medicinal products and supplements has increased during last decades. At present, some herbs are used to enhance muscle strength and body mass. Emergent evidence suggests that the health benefits from plants are attributed to their bioactive compounds such as Polyphenols, Terpenoids, and Alkaloids which have several physiological effects on the human body. At times, manufacturers launch numerous products with banned ingredient inside with inappropriate amounts or fake supplement inducing harmful side effect. Unfortunately up to date, there is no guarantee that herbal supplements are safe for anyone to use and it has not helped to clear the confusion surrounding the herbal use in sport field especially. Hence, the purpose of this review is to provide guidance on the efficacy and side effect of most used plants in sport. We have identified plants according to the following categories: Ginseng, alkaloids, and other purported herbal ergogenics such as Tribulus Terrestris, Cordyceps Sinensis. We found that most herbal supplement effects are likely due to activation of the central nervous system via stimulation of catecholamines. Ginseng was used as an endurance performance enhancer, while alkaloids supplementation resulted in improvements in sprint and cycling intense exercises. Despite it is prohibited, small amount of ephedrine was usually used in combination with caffeine to enhance muscle strength in trained individuals. Some other alkaloids such as green tea extracts have been used to improve body mass and composition in athletes. Other herb (i.e. Rhodiola, Astragalus) help relieve muscle and joint pain, but results about their effects on exercise performance are missing.", "title": "" }, { "docid": "aaf69cb42fc9d17cf0ae3b80a55f12d6", "text": "Bringing Blockchain technology and business process management together, we follow the Design Science Research approach and design, implement, and evaluate a Blockchain prototype for crossorganizational workflow management together with a German bank. For the use case of a documentary letter of credit we describe the status quo of the process, identify areas of improvement, implement a Blockchain solution, and compare both workflows. The prototype illustrates that the process, as of today paper-based and with high manual effort, can be significantly improved. Our research reveals that a tamper-proof process history for improved auditability, automation of manual process steps and the decentralized nature of the system can be major advantages of a Blockchain solution for crossorganizational workflow management. Further, our research provides insights how Blockchain technology can be used for business process management in general.", "title": "" }, { "docid": "babaff1827bda11fab8d80d631380f1f", "text": "This paper presents a novel bidirectional current-fed dual inductor push-pull DC-DC converter with galvanic isolation. The converter features active voltage doubler rectifier, which is controlled by the switching sequence synchronous to that of the input-side switches. 
The proposed control algorithm enables full soft-switching of all switches over a wide range of input voltage and power without requiring snubbers or resonant switching. The operation principle for energy transfer in both directions is described. Experimental results as well as basic design guidelines are presented.", "title": "" }, { "docid": "42a3a06fc13f03dd6be70284bc65b5c6", "text": "Superpixel segmentation has become a popular preprocessing step in computer vision with a great variety of existing algorithms. Almost all algorithms claim to compute compact superpixels, but no one showed how to measure compactness and no one investigated the implications. In this paper, we propose a novel metric to measure superpixel compactness. With this metric, we show that there is a trade-off between compactness and boundary recall. In addition, we propose an algorithm that allows precise control of this trade-off and that outperforms the current state-of-the-art. As a demonstration, we show the importance of considering compactness with the help of an example application.", "title": "" }, { "docid": "7862cd37ea07523f0ae7eb870ce95291", "text": "Producing good low-dimensional representations of high-dimensional data is a common and important task in many data mining applications. Two methods that have been particularly useful in this regard are multidimensional scaling and nonlinear mapping. These methods attempt to visualize a set of objects described by means of a dissimilarity or distance matrix on a low-dimensional display plane in a way that preserves the proximities of the objects to whatever extent is possible. Unfortunately, most known algorithms are of quadratic order, and their use has been limited to relatively small data sets. We recently demonstrated that nonlinear maps derived from a small random sample of a large data set exhibit the same structure and characteristics as that of the entire collection, and that this structure can be easily extracted by a neural network, making possible the scaling of data sets orders of magnitude larger than those accessible with conventional methodologies. Here, we present a variant of this algorithm based on local learning. The method employs a fuzzy clustering methodology to partition the data space into a set of Voronoi polyhedra, and uses a separate neural network to perform the nonlinear mapping within each cell. We find that this local approach offers a number of advantages, and produces maps that are virtually indistinguishable from those derived with conventional algorithms. These advantages are discussed using examples from the fields of combinatorial chemistry and optical character recognition.", "title": "" }, { "docid": "7d9462f990891099380038adfb325924", "text": "In privacy preserving data mining, anonymization based approaches have been used to preserve the privacy of an individual. Existing literature addresses various anonymization based approaches for preserving the sensitive private information of an individual. The k-anonymity model is one of the widely used anonymization based approaches. However, the anonymization based approaches suffer from the issue of information loss. To minimize the information loss, various state-of-the-art anonymization based clustering approaches, viz. the Greedy k-member algorithm and the Systematic clustering algorithm, have been proposed. Among them, the Systematic clustering algorithm gives lower information loss.
In addition, these approaches make use of all attributes during the creation of an anonymized database. Therefore, the risk of disclosure of sensitive private data is higher when all the attributes are published. In this paper, we propose two approaches for minimizing the disclosure risk and preserving the privacy by using the systematic clustering algorithm. The first approach creates an unequal combination of quasi-identifier and sensitive attribute. The second approach creates an equal combination of quasi-identifier and sensitive attribute. We also evaluate our approach empirically focusing on the information loss and execution time as vital metrics. We illustrate the effectiveness of the proposed approaches by comparing them with the existing clustering algorithms.", "title": "" }, { "docid": "900e2d0589c026f90ad1f89f1740a6c2", "text": "We review recent progress in the measurement and understanding of the electrical properties of individual metal and semiconducting single-wall carbon nanotubes. The fundamental scattering mechanisms governing the electrical transport in nanotubes are discussed, along with the properties of p–n and Schottky-barrier junctions in semiconductor tubes. The use of advanced nanotube devices for electronic, high-frequency, and electromechanical applications is discussed. We then examine quantum transport in carbon nanotubes, including the observation of quantized conductance, proximity-induced supercurrents, and spin-dependent ballistic transport. We move on to explore the properties of single and coupled carbon-nanotube quantum dots. Spin and orbital (isospin) magnetic moments lead to fourfold shell structure and unusual Kondo phenomena. We conclude with a discussion of unanswered questions and a look to future research directions.", "title": "" }, { "docid": "7e5c3e774572e59180637da0d3b2d71a", "text": "Relationship marketing—establishing, developing, and maintaining successful relational exchanges—constitutes a major shift in marketing theory and practice. After conceptualizing relationship marketing and discussing its ten forms, the authors (1) theorize that successful relationship marketing requires relationship commitment and trust, (2) model relationship commitment and trust as key mediating variables, (3) test this key mediating variable model using data from automobile tire retailers, and (4) compare their model with a rival that does not allow relationship commitment and trust to function as mediating variables. Given the favorable test results for the key mediating variable model, suggestions for further explicating and testing it are offered.", "title": "" }, { "docid": "702349e4c5652fe6750425cd586b47f7", "text": "What should constitute knowledge bases that we expect our future teachers to gain related to pedagogically sound technology integration? Employing Shulman's teacher knowledge base as a theoretical lens, this study examined the complexity of pre-service teachers' technological pedagogical content knowledge (TPCK) in the context of integrating problem based learning (PBL) and information and communications technology (ICT). Ninety-seven pre-service teachers in this study engaged in a collaborative lesson design project where they applied pedagogical knowledge about PBL to design a technology integrated lesson in their subject area of teaching. Data were collected from two sources: survey and lesson design artifacts.
Data analyses revealed that while participants had theoretical understandings of pedagogical knowledge about PBL, their lesson designs showed a mismatch among technology tools, content representations, and pedagogical strategies, indicating conflicts in translating pedagogical content knowledge into designing pedagogically sound, technology integrated lessons. The areas that students perceived to be particularly challenging and difficult include: a) generating authentic and ill-structured problems for a chosen content topic, b) finding and integrating ICT tools and resources relevant for the target students and learning activities, and c) designing tasks with a balance between teacher guidance and student independence. The present study suggests the potential of two explanations for such difficulties: lack of intimate connection among beliefs, knowledge, and actions, and insufficient repertoires for teaching with technology for problem based learning.", "title": "" }, { "docid": "4229e2db880628ea2f0922a94c30efe0", "text": "Since the end of the 20th century, it has become clear that web browsers will play a crucial role in accessing Internet resources such as the World Wide Web. They evolved into complex software suites that are able to process a multitude of data formats. Just-In-Time (JIT) compilation was incorporated to speed up the execution of script code, but is also used besides web browsers for performance reasons. Attackers happily welcomed JIT in their own way, and until today, JIT compilers are an important target of various attacks. This includes, for example, JIT-Spray, JIT-based code-reuse attacks and JIT-specific flaws to circumvent mitigation techniques in order to simplify the exploitation of memory-corruption vulnerabilities. Furthermore, JIT compilers are complex and provide a large attack surface, which is visible in the steady stream of critical bugs appearing in them. In this paper, we survey and systematize the jungle of JIT compilers of major (client-side) programs, and provide a categorization of offensive techniques for abusing JIT compilation. Thereby, we present techniques used in academic as well as in non-academic works which try to break various defenses against memory-corruption vulnerabilities. Additionally, we discuss what mitigations arose to harden JIT compilers to impede exploitation by skilled attackers wanting to abuse Just-In-Time compilers.", "title": "" }, { "docid": "9c540b058e851cd9fa3a0195b039b965", "text": "The proposed active learning framework learns scene and object classification models simultaneously. Both scene and object classification models take advantage of the interdependence between them to select the most informative samples with the least manual labeling cost. To the best of our knowledge, no previous work has used active learning to classify scenes and objects together. Leveraging the inter-relationships between scene and objects, we propose a new information-theoretic sample selection strategy. [Figure: overview of the proposed joint active learning framework.]", "title": "" }, { "docid": "36e5cd6aac9b0388f67a9584d9bf0bf6", "text": "To learn to program, a novice programmer must understand the dynamic, runtime aspect of program code, a so-called notional machine. Understanding the machine can be easier when it is represented graphically, and tools have been developed to this end.
However, these tools typically support only one programming language and do not work in a web browser. In this article, we present the functionality and technical implementation of two visualization tools. First, the language-agnostic and extensible Jsvee library helps educators visualize notional machines and create expression-level program animations for online course materials. Second, the Kelmu toolkit can be used by ebook authors to augment automatically generated animations, for instance by adding annotations such as textual explanations and arrows. Both of these libraries have been used in introductory programming courses, and there is preliminary evidence that students find the animations useful.", "title": "" }, { "docid": "81b242e3c98eaa20e3be0a9777aa3455", "text": "Humor is an integral part of human lives. Despite being tremendously impactful, it is perhaps surprising that we do not have a detailed understanding of humor yet. As interactions between humans and AI systems increase, it is imperative that these systems are taught to understand subtleties of human expressions such as humor. In this work, we are interested in the question - what content in a scene causes it to be funny? As a first step towards understanding visual humor, we analyze the humor manifested in abstract scenes and design computational models for them. We collect two datasets of abstract scenes that facilitate the study of humor at both the scene-level and the object-level. We analyze the funny scenes and explore the different types of humor depicted in them via human studies. We model two tasks that we believe demonstrate an understanding of some aspects of visual humor. The tasks involve predicting the funniness of a scene and altering the funniness of a scene. We show that our models perform well quantitatively, and qualitatively through human studies. Our datasets are publicly available.", "title": "" }, { "docid": "3e26fe227e8c270fda4fe0b7d09b2985", "text": "With the recent emergence of mobile platforms capable of executing increasingly complex software and the rising ubiquity of using mobile platforms in sensitive applications such as banking, there is a rising danger associated with malware targeted at mobile devices. The problem of detecting such malware presents unique challenges due to the limited resources available and limited privileges granted to the user, but also presents a unique opportunity in the required metadata attached to each application. In this article, we present a machine learning-based system for the detection of malware on Android devices. Our system extracts a number of features and trains a One-Class Support Vector Machine in an offline (off-device) manner, in order to leverage the higher computing power of a server or cluster of servers.", "title": "" }, { "docid": "f324a61fcdbfc00aecdfdceb412000c7", "text": "A path profile determines how many times each acyclic path in a routine executes. This type of profiling subsumes the more common basic block and edge profiling, which only approximate path frequencies. Path profiles have many potential uses in program performance tuning, profile-directed compilation, and software test coverage. This paper describes a new algorithm for path profiling. This simple, fast algorithm selects and places profile instrumentation to minimize run-time overhead. Instrumented programs run with overhead comparable to the best previous profiling techniques.
On the SPEC95 benchmarks, path profiling overhead averaged 31%, as compared to 16% for efficient edge profiling. Path profiling also identifies longer paths than a previous technique, which predicted paths from edge profiles (average of 88, versus 34 instructions). Moreover, profiling shows that the SPEC95 train input datasets covered most of the paths executed in the ref datasets.", "title": "" }, { "docid": "3224233a8a91c8d44e366b7b2ab8e7a1", "text": "In this work we describe the scenario of fully-immersive desktop VR, which serves the overall goal to seamlessly integrate with existing workflows and workplaces of data analysts and researchers, such that they can benefit from the gain in productivity when immersed in their data-spaces. Furthermore, we provide a literature review showing the status quo of techniques and methods available for realizing this scenario under the raised restrictions. Finally, we propose a concept of an analysis framework and the decisions made and the decisions still to be taken, to outline how the described scenario and the collected methods are feasible in a real use case.", "title": "" }, { "docid": "986a682b195943f3b0dbb120087511a4", "text": "The paper discusses three main adaptive filtering algorithms with partial updates and low computational complexities that converge fast and have a significantly better mean square error (MSE) performance than their non selective-update versions when they are tuned well. The algorithms are set-membership normalized least mean squares (SM-NLMS), SM affine projection (SM-AP) and SM recursive least squares (SM-RLS, also known as BEACON). The lifetime of a wireless sensor network (WSN) is often governed by its power consumption. We show how the previous works for energy prediction, channel estimation, localization and data replication in WSNs can be improved in both accuracy and energy conservation by employing these algorithms. We derive two simplified versions of the SM-AP and BEACON algorithms to further minimize the computational load. The probable drawbacks of the algorithms and the alternative solutions are also investigated. To exhibit the improvements and compare the algorithms, computer simulations are conducted for different scenarios. The purpose is to show that many signal processing algorithms for WSNs can be replaced by one general low complexity algorithm which can perform different tasks by minor parameter adjustments.", "title": "" } ]
scidocsrr
85876bcecb7770297af774a2ab9a259a
SAMM: A Spontaneous Micro-Facial Movement Dataset
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "4ddad3c97359faf4b927167800fe77be", "text": "Micro-expressions are facial expressions which are fleeting and reveal genuine emotions that people try to conceal. These are important clues for detecting lies and dangerous behaviors and therefore have potential applications in various fields such as the clinical field and national security. However, recognition through the naked eye is very difficult. Therefore, researchers in the field of computer vision have tried to develop micro-expression detection and recognition algorithms but lack spontaneous micro-expression databases. In this study, we attempted to create a database of spontaneous micro-expressions which were elicited from neutralized faces. Based on previous psychological studies, we designed an effective procedure in lab situations to elicit spontaneous micro-expressions and analyzed the video data with care to offer valid and reliable codings. From 1500 elicited facial movements filmed under 60fps, 195 micro-expressions were selected. These samples were coded so that the first, peak and last frames were tagged. Action units (AUs) were marked to give an objective and accurate description of the facial movements. Emotions were labeled based on psychological studies and participants' self-report to enhance the validity.", "title": "" } ]
[ { "docid": "916fd932ae299b30f322aed6b5f35a9c", "text": "This paper proposes a novel parametric warp which is a spatial combination of a projective transformation and a similarity transformation. Given the projective transformation relating two input images, based on an analysis of the projective transformation, our method smoothly extrapolates the projective transformation of the overlapping regions into the non-overlapping regions and the resultant warp gradually changes from projective to similarity across the image. The proposed warp has the strengths of both projective and similarity warps. It provides good alignment accuracy as projective warps while preserving the perspective of individual image as similarity warps. It can also be combined with more advanced local-warp-based alignment methods such as the as-projective-as-possible warp for better alignment accuracy. With the proposed warp, the field of view can be extended by stitching images with less projective distortion (stretched shapes and enlarged sizes).", "title": "" }, { "docid": "fdb3b7e8a657b81fd4242359fbf4032a", "text": "This paper investigates how to effectively do cross lingual text classification by leveraging a large scale and multilingual knowledge base, Wikipedia. Based on the observation that each Wikipedia concept is described by documents of different languages, we adapt existing topic modeling algorithms for mining multilingual topics from this knowledge base. The extracted topics have multiple types of representations, with each type corresponding to one language. In this work, we regard such topics extracted from Wikipedia documents as universal-topics, since each topic corresponds with same semantic information of different languages. Thus new documents of different languages can be represented in a space using a group of universal-topics. We use these universal-topics to do cross lingual text classification. Given the training data labeled for one language, we can train a text classifier to classify the documents of another language by mapping all documents of both languages into the universal-topic space. This approach does not require any additional linguistic resources, like bilingual dictionaries, machine translation tools, or labeling data for the target language. The evaluation results indicate that our topic modeling approach is effective for building cross lingual text classifier.", "title": "" }, { "docid": "d90467d05b4df62adc94b7c150013968", "text": "Bacterial flagella and type III secretion system (T3SS) are evolutionarily related molecular transport machineries. Flagella mediate bacterial motility; the T3SS delivers virulence effectors to block host defenses. The inflammasome is a cytosolic multi-protein complex that activates caspase-1. Active caspase-1 triggers interleukin-1β (IL-1β)/IL-18 maturation and macrophage pyroptotic death to mount an inflammatory response. Central to the inflammasome is a pattern recognition receptor that activates caspase-1 either directly or through an adapter protein. Studies in the past 10 years have established a NAIP-NLRC4 inflammasome, in which NAIPs are cytosolic receptors for bacterial flagellin and T3SS rod/needle proteins, while NLRC4 acts as an adapter for caspase-1 activation. Given the wide presence of flagella and the T3SS in bacteria, the NAIP-NLRC4 inflammasome plays a critical role in anti-bacteria defenses. 
Here, we review the discovery of the NAIP-NLRC4 inflammasome and further discuss recent advances related to its biochemical mechanism and biological function as well as its connection to human autoinflammatory disease.", "title": "" }, { "docid": "8047032f0ef24d5d32ae3a5eae3e4bf3", "text": "BACKGROUND\nFox-Fordyce disease (FFD) is a relatively rare entity with a typical clinical presentation. Numerous studies have described unifying histopathological features of FFD, which together suggest a defect in the follicular infundibulum resulting in follicular dilation with keratin plugging, subsequent apocrine duct obstruction, and apocrine gland dilation, with eventual extravasation of the apocrine secretions as the primary histopathogenic events in the evolution of the disease.\n\n\nOBSERVATIONS\nWe describe a case of FFD that developed in a 41-year-old woman 3 months after completing a series of axillary laser hair removal treatments, and we detail the clinical and histopathological changes typical for FFD.\n\n\nCONCLUSION\nBecause defective infundibular maturation has been suggested to play a central role in the evolution of FFD, the close temporal relationship of laser hair therapy with the development of FFD suggests a causal role, which we continue to explore.", "title": "" }, { "docid": "3d5ab2c686c11527296537b4c8396ae2", "text": "This study investigated writing beliefs, self-regulatory behaviors, and epistemology beliefs of preservice teachers in academic writing tasks. Students completed self-report measures of selfregulation, epistemology, and beliefs about writing. Both knowledge and regulation of cognition were positively related to writing enjoyment, and knowledge of cognition was negatively related to beliefs of ability as a fixed entity. Enjoyment of writing was related to learnability and selfassessment. It may be that students who are more self-regulated during writing also believe they can learn to improve their writing skills. It may be, however, that students who believe writing is learnable will exert the effort to self-regulate during writing. Student beliefs and feelings about learning and writing play an important and complex role in their self-regulation behaviors. Suggestions for instruction are included, and continued research of students’ beliefs and selfregulation in naturalistic contexts is recommended.", "title": "" }, { "docid": "e629f1935ab4f69ffaefdaa59b374a05", "text": "Higher-order low-rank tensors naturally arise in many applications including hyperspectral data recovery, video inpainting, seismic data reconstruction, and so on. We propose a new model to recover a low-rank tensor by simultaneously performing low-rank matrix factorizations to the all-mode matricizations of the underlying tensor. An alternating minimization algorithm is applied to solve the model, along with two adaptive rank-adjusting strategies when the exact rank is not known. Phase transition plots reveal that our algorithm can recover a variety of synthetic low-rank tensors from significantly fewer samples than the compared methods, which include a matrix completion method applied to tensor recovery and two state-of-the-art tensor completion methods. Further tests on real-world data show similar advantages. Although our model is non-convex, our algorithm performs consistently throughout the tests and give better results than the compared methods, some of which are based on convex models. 
In addition, the global convergence of our algorithm can be established in the sense that the gradient of the Lagrangian function converges to zero.", "title": "" }, { "docid": "1463e545177c0ad5ab87c394b504b1ee", "text": "The term Cyber-Physical Systems (CPS) typically refers to engineered, physical and biological systems monitored and/or controlled by an embedded computational core. The behaviour of a CPS over time is generally characterised by the evolution of physical quantities, and discrete software and hardware states. In general, these can be mathematically modelled by the evolution of continuous state variables for the physical components interleaved with discrete events. Despite large effort and progress in the exhaustive verification of such hybrid systems, the complexity of CPS models limits formal verification of safety of their behaviour only to small instances. An alternative approach, closer to the practice of simulation and testing, is to monitor and to predict CPS behaviours at simulation-time or at runtime. In this chapter, we summarise the state-of-the-art techniques for qualitative and quantitative monitoring of CPS behaviours. We present an overview of some of the important applications and, finally, we describe the tools supporting CPS monitoring and compare their main features.", "title": "" }, { "docid": "34a5d59c8b72690c7d776871447af6d0", "text": "Electronic commerce lets people purchase goods and exchange information on business transactions online. The most popular e-commerce channel is the Internet. Although the Internet's role as a business channel is a fairly recent phenomenon, its impact, financial and otherwise, has been substantially greater than that of other business channels in existence for several decades. E-commerce gives companies improved efficiency and reliability of business processes through transaction automation. There are two major types of e-commerce: business to consumer (B2C), in which consumers purchase products and services from businesses, and business to business (B2B), in which businesses buy and sell among themselves. A typical business depends on other businesses for several of the direct and indirect inputs to its end products. For example, Dell Computer depends on one company for microprocessor chips and another for hard drives. B2B e-commerce automates and streamlines the process of buying and selling these intermediate products. It provides more reliable updating of business data. For procurement transactions, buyers and sellers can meet in an electronic marketplace and exchange information. In addition, B2B makes product information available globally and updates it in real time. Hence, procuring organizations can take advantage of vast amounts of product information. B2C e-commerce is now sufficiently stable. Judging from its success, we can expect B2B to similarly improve business processes for a better return on investment. Market researchers predict that B2B transactions will amount to a few trillion dollars in the next few years, as compared to about 100 billion dollars' worth of B2C transactions. B2C was easier to achieve, given the relative simplicity of reaching its target: the individual consumer. That's not the case with B2B, which involves engineering the interactions of diverse, complex enterprises. Interoperability is therefore a key issue in B2B.
To achieve interoperability, many companies have formed consortia to develop B2B frameworks—generic templates that provide functions enabling businesses to communicate efficiently over the Internet. The consortia aim to provide an industrywide standard that companies can easily adopt. Their work has resulted in several technical standards. Among the most popular are Open Buying on the Internet (OBI), eCo, RosettaNet, commerce XML (cXML), and BizTalk. The problem with these standards, and many others, is that they are incompatible. Businesses trying to implement a B2B framework are bewildered by a variety of standards that point in different directions. Each standard has its merits and demerits. To aid decision-makers in choosing …", "title": "" }, { "docid": "53595cdb8e7a9e8ee2debf4e0dda6d45", "text": "Botnets have become one of the major threats on the internet today due to the illicit financial gain they provide. Meanwhile, honeypots have been successfully deployed in many computer security defence systems. Since honeypots set up by security defenders can attract botnet compromises and become spies in exposing botnet membership and botnet attacker behaviours, they are widely used by security defenders in botnet defence. Therefore, attackers constructing and maintaining botnets will be forced to find ways to avoid honeypot traps. In this paper, we present a hardware and software independent honeypot detection methodology based on the following assumption: security professionals deploying honeypots have a liability constraint such that they cannot allow their honeypots to participate in real attacks that could cause damage to others, while attackers do not need to follow this constraint. Attackers could detect honeypots in their botnets by checking whether compromised machines in a botnet can successfully send out unmodified malicious traffic. Based on this basic detection principle, we present honeypot detection techniques to be used in both centralised botnets and Peer-to-Peer (P2P) structured botnets. Experiments show that current standard honeypots and honeynet programs are vulnerable to the proposed honeypot detection techniques. At the end, we discuss some guidelines for defending against general honeypot-aware attacks.", "title": "" }, { "docid": "fcbb5b1adf14b443ef0d4a6f939140fe", "text": "In this paper we make the case for IoT edge offloading, which strives to exploit the resources on edge computing devices by offloading fine-grained computation tasks from the cloud closer to the users and data generators (i.e., IoT devices). The key motive is to enhance performance, security and privacy for IoT services. Our proposal bridges the gap between cloud computing and IoT by applying a divide and conquer approach over the multi-level (cloud, edge and IoT) information pipeline. To validate the design of IoT edge offloading, we developed a unikernel-based prototype and evaluated the system under various hardware and network conditions. Our experimentation has shown promising results and revealed the limitation of existing IoT hardware and virtualization platforms, shedding light on future research of edge computing and IoT.", "title": "" }, { "docid": "6196444488388da0ab6a6b79d05af6e0", "text": "Data mining techniques are becoming very popular nowadays because of the wide availability of huge quantities of data and the need for transforming such data into knowledge.
In today’s globalized environment, the core banking model and cut-throat competition leave banks struggling to gain a competitive edge over each other. Face-to-face interaction with customers hardly exists in the modern banking world. Banking systems collect huge amounts of data on a day-to-day basis, be it customer information, transaction details like deposits and withdrawals, loans, risk profiles, credit card details, credit limits and collateral-related information. Thousands of decisions are taken in a bank on a daily basis. In recent years the ability to generate, capture and store data has increased enormously. The information contained in this data can be very important. The wide availability of huge amounts of data and the need for transforming such data into knowledge encourage the IT industry to use data mining. Lending is the primary business of banks. Credit risk management is one of the most important and critical factors in the banking world. Without proper credit risk management, banks face huge losses and lending becomes very difficult. Data mining techniques are widely used in the banking industry, helping banks compete in the market and provide the right product to the right customer with less risk. Credit risks, which account for the risk of loss and loan defaults, are the major source of risk encountered by the banking industry. Data mining techniques like classification and prediction can be applied to overcome this to a great extent. In this paper we introduce an effective prediction model that helps bankers predict the credible customers who have applied for a loan. The Decision Tree Induction Data Mining Algorithm is applied to predict the attributes relevant for credibility. A prototype of the model is described in this paper which organizations can use in making the right decision to approve or reject a customer's loan request. Keywords— Banking industry; Data Mining; Risk Management; Classification; Credit Scoring; Non-Performing Assets; Default Detection; Non-Performing Loans; Decision Tree; Credit Risk Assessment; Classification; Prediction", "title": "" }, { "docid": "74d7f4b3cc7458c35120e83acbd74f08", "text": "Machine learning (ML) has the potential to revolutionize the field of radiation oncology, but there is much work to be done. In this article, we approach the radiotherapy process from a workflow perspective, identifying specific areas where a data-centric approach using ML could improve the quality and efficiency of patient care. We highlight areas where ML has already been used, and identify areas where we should invest additional resources. We believe that this article can serve as a guide for both clinicians and researchers to start discussing issues that must be addressed in a timely manner.", "title": "" }, { "docid": "df99d221aa2f31f03a059106991a1728", "text": "With the advancement of mobile computing technology and cloud-based streaming music services, user-centered music retrieval has become increasingly important. User-specific information has a fundamental impact on personal music preferences and interests. However, existing research pays little attention to the modeling and integration of user-specific information in music retrieval algorithms/models to facilitate music search. In this paper, we propose a novel model, named User-Information-Aware Music Interest Topic (UIA-MIT) model.
The model is able to effectively capture the influence of user-specific information on music preferences, and further associate users' music preferences and search terms under the same latent space. Based on this model, a user information aware retrieval system is developed, which can search and re-rank the results based on age- and/or gender-specific music preferences. A comprehensive experimental study demonstrates that our methods can significantly improve the search accuracy over existing text-based music retrieval methods.", "title": "" }, { "docid": "bb6857df2dbcb19228e80a410a1fc6d6", "text": "We introduce a new large-scale data set of video URLs with densely-sampled object bounding box annotations called YouTube-BoundingBoxes (YT-BB). The data set consists of approximately 380,000 video segments about 19s long, automatically selected to feature objects in natural settings without editing or post-processing, with a recording quality often akin to that of a hand-held cell phone camera. The objects represent a subset of the COCO [32] label set. All video segments were human-annotated with high-precision classification labels and bounding boxes at 1 frame per second. The use of a cascade of increasingly precise human annotations ensures a label accuracy above 95% for every class and tight bounding boxes. Finally, we train and evaluate well-known deep network architectures and report baseline figures for per-frame classification and localization. We also demonstrate how the temporal contiguity of video can potentially be used to improve such inferences. The data set can be found at https://research.google.com/youtube-bb. We hope the availability of such large curated corpus will spur new advances in video object detection and tracking.", "title": "" }, { "docid": "950fc111eb871c418fb2d1c28dfe7fea", "text": "There is an increasing demand for goal-oriented conversation systems which can assist users in various day-to-day activities such as booking tickets, restaurant reservations, shopping, etc. Most of the existing datasets for building such conversation systems focus on monolingual conversations and there is hardly any work on multilingual and/or code-mixed conversations. Such datasets and systems thus do not cater to the multilingual regions of the world, such as India, where it is very common for people to speak more than one language and seamlessly switch between them resulting in code-mixed conversations. For example, a Hindi speaking user looking to book a restaurant would typically ask, “Kya tum is restaurant mein ek table book karne mein meri help karoge?” (“Can you help me in booking a table at this restaurant?”). To facilitate the development of such code-mixed conversation models, we build a goal-oriented dialog dataset containing code-mixed conversations. Specifically, we take the text from the DSTC2 restaurant reservation dataset and create code-mixed versions of it in Hindi-English, Bengali-English, Gujarati-English and Tamil-English. We also establish initial baselines on this dataset using existing state of the art models. This dataset along with our baseline implementations is made publicly available for research purposes.", "title": "" }, { "docid": "d99dc9140d22538e2ed6b903b8d1df50", "text": "Crowdfunding websites like Kickstarter, Spot.Us and Donor's Choose seek to fund multiple projects simultaneously by soliciting donations from a large number of donors. 
Crowdfunding site designers must decide what to do with donations to projects that don't reach their goal by the deadline. Some crowdfunding sites use an all-or-nothing return rule in which donations are returned to donors if a project doesn't meet its goal. Other sites use a direct donation structure where all donations are kept by the project even if the total is insufficient. We simulated a crowdfunding site using a threshold public goods game in which a set of donors tries to fund multiple projects that vary in riskiness. We find that the return rule mechanism leads to a marginal improvement in productivity of a site -- more money is donated in total -- by eliciting more donations. However, the return rule also leads to a potential loss in efficiency (percentage of projects funded) because donations become spread across too many projects and are not coordinated to achieve the maximum possible impact. The direct donation model, though, encourages donors to coordinate, creating a more efficient but slightly less productive marketplace.", "title": "" }, { "docid": "b0b1139c48bbe2286096a7e795d4d0cb", "text": "This chapter identifies the most robust conclusions and ideas about adolescent development and psychological functioning that have emerged since Petersen's 1988 review. We begin with a discussion of topics that have dominated recent research, including adolescent problem behavior, parent-adolescent relations, puberty, the development of the self, and peer relations. We then identify and examine what seem to us to be the most important new directions that have come to the fore in the last decade, including research on diverse populations, contextual influences on development, behavioral genetics, and siblings. We conclude with a series of recommendations for future research on adolescence.", "title": "" }, { "docid": "93aaea4fc6c617c078a858baafd22d22", "text": "Network system designers need to understand the error performance of wireless mobile channels in order to improve the quality of communications by deploying better modulation and coding schemes, and better network architectures. It is also desirable to have an accurate and thoroughly reproducible error model, which would allow network designers to evaluate a protocol or algorithm and its variations in a controlled and repeatable way. However, the physical properties of radio propagation, and the diversities of error environments in a wireless medium, lead to complexity in modeling the error performance of wireless channels. This article surveys the error modeling methods of fading channels in wireless communications, and provides a novel user-requirement (researchers and designers) based approach to classify the existing wireless error models.", "title": "" }, { "docid": "118526b566b800d9dea30d2e4c904feb", "text": "With the growth of web resources and the huge amount of information available, the need for automatic summarization systems has emerged. Since summarization is needed most when searching for information on the web, where the user targets a certain domain of interest with a query, domain-based summaries serve this purpose best. Despite plenty of research work on domain-based summarization in English, there is a lack of such work in Arabic due to the shortage of existing knowledge bases. In this paper we introduce a query-based, single-document summarization approach for Arabic text using an existing Arabic language thesaurus and an extracted knowledge base.
We use an Arabic corpus to extract domain knowledge represented by topic-related concepts/keywords and the lexical relations among them. The user’s query is expanded once by using the Arabic WordNet thesaurus and then by adding the domain-specific knowledge base to the expansion. For the summarization dataset, the Essex Arabic Summaries Corpus was used. It has many topic-based articles with multiple human summaries. The performance appeared to be enhanced when using our extracted knowledge base compared to using WordNet alone.", "title": "" }, { "docid": "2fd16e94706bec951c2e194974249c42", "text": "This paper presents a novel design of ternary logic inverters using carbon nanotube FETs (CNTFETs). Multiple-valued logic (MVL) circuits have attracted substantial interest due to the capability of increasing information content per unit area. In the past, extensive design techniques for MVL circuits (especially ternary logic inverters) have been proposed for implementation in CMOS technology. In a CNTFET device, the threshold voltage of the transistor can be controlled by controlling the chirality vector (i.e. the diameter); in this paper this feature is exploited to design ternary logic inverters. New designs are proposed and compared with existing CNTFET-based designs. Extensive simulation results using SPICE demonstrate that the power delay product is improved by 300% compared to the conventional ternary gate design.", "title": "" } ]
scidocsrr
13fa0840e1fba445cc663953f5cfa193
Lane marking detection based on adaptive threshold segmentation and road classification
[ { "docid": "d01fe3897f0f09fc023d943ece518e6e", "text": "In this paper, we propose an efficient lane detection algorithm for lane departure detection; this algorithm is suitable for low computing power systems like automobile black boxes. First, we extract candidate points, which are support points, to extract a hypotheses as two lines. In this step, Haar-like features are used, and this enables us to use an integral image to remove computational redundancy. Second, our algorithm verifies the hypothesis using defined rules. These rules are based on the assumption that the camera is installed at the center of the vehicle. Finally, if a lane is detected, then a lane departure detection step is performed. As a result, our algorithm has achieved 90.16% detection rate; the processing time is approximately 0.12 milliseconds per frame without any parallel computing.", "title": "" } ]
[ { "docid": "4a779f5e15cc60f131a77c69e09e54bc", "text": "We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular focus on problems arising in image processing. We are motivated by the problem of restoring noisy and blurry images via variational methods by using total variation regularization. We obtain rigorous convergence results and effective stopping criteria for the general procedure. The numerical results for denoising appear to give significant improvement over standard models, and preliminary results for deblurring/denoising are very encouraging.", "title": "" }, { "docid": "69e87ea7f07f96088486b7dd9105841b", "text": "When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks.", "title": "" }, { "docid": "2639c6ed94ad68f5e0c4579f84f52f35", "text": "This article introduces the Swiss Army Menu (SAM), a radial menu that enables a very large number of functions on a single small tactile screen. The design of SAM relies on four different kinds of items, support for navigating in hierarchies of items and a control based on small thumb movements. SAM can thus offer a set of functions so large that it would typically have required a number of widgets that could not have been displayed in a single viewport at the same time.", "title": "" }, { "docid": "62d9add3a14100d57fc9d1c1342029e3", "text": "A multidimensional access method offering significant performance increases by intelligently partitioning the query space is applied to relational database management systems (RDBMS). We introduce a formal model for multidimensional partitioned relations and discuss several typical query patterns. The model identifies the significance of multidimensional range queries and sort operations. The discussion of current access methods gives rise to the need for a multidimensional partitioning of relations. A detailed analysis of space partitioning focussing especially on Z-ordering illustrates the principle benefits of multidimensional indexes. After describing the UB-Tree and its standard algorithms for insertion, deletion, point queries, and range queries, we introduce the spiral algorithm for nearest neighbor queries with UB-Trees and the Tetris algorithm for efficient access to a table in arbitrary sort order. We then describe the complexity of the involved algorithms and give solutions to selected algorithmic problems for a prototype implementation of UB-Trees on top of several RDBMSs. 
A cost model for sort operations with and without range restrictions is used both for analyzing our algorithms and for comparing UB-Trees with state-of-the-art query processing. Performance comparisons with traditional access methods practically confirm the theoretically expected superiority of UB-Trees and our algorithms over traditional access methods: Query processing in RDBMS is accelerated by several orders of magnitude, while the resource requirements in main memory space and disk space are substantially reduced. Benchmarks on some queries of the TPC-D benchmark as well as the data warehousing scenario of a fruit juice company illustrate the potential impact of our work on relational algebra, SQL, and commercial applications. The results of this thesis were developed by the author managing the MISTRAL project, a joint research and development project with SAP AG (Germany), Teijin Systems Technology Ltd. (Japan), NEC (Japan), Hitachi (Japan), Gesellschaft für Konsumforschung (Germany), and TransAction Software GmbH (Germany).", "title": "" }, { "docid": "7d2bc65a5d4b05380d9e150bae63c7d3", "text": "Purpose. The possibility of using polysorbate 80-coated nanoparticles for the delivery of the water insoluble opioid agonist loperamide across the blood-brain barrier was investigated. The analgesic effect after i.v. injection of the preparations was used to indicate drug transport through this barrier. Methods. Loperamide was incorporated into PBCA nanoparticles. Drug-containing nanoparticles were coated with polysorbate 80 and injected intravenously into mice. Analgesia was then measured by the tail-flick test. Results. Intravenous injection of the particulate formulation resulted in a long and significant analgesic effect. A polysorbate 80 loperamide solution induced a much less pronounced and very short analgesia. Uncoated nanoparticles loaded with loperamide were unable to produce analgesia. Conclusions. Polysorbate 80-coated PBCA nanoparticles loaded with loperamide enabled the transport of loperamide to the brain.", "title": "" }, { "docid": "496864f6ccafbc23e52d8cead505eac7", "text": "Hotel guests’ expectations and actual experiences on hotel service quality often fail to coincide due to guests’ unusually high anticipations, hotels’ complete breakdowns in delivering their standard, or the combination of both. Moreover, this disconfirmation could be augmented contingent upon the level of hotel segment (hotel star-classification) and the overall rating manifested by previous guests. By incorporating a 2 × 2 matrix design in which a hotel star-classification configures one dimension (2 versus 4 stars) and customers’ overall rating (lower versus higher overall ratings) configures the other, this explorative multiple case study uses conjoint analyses to examine the differences in the comparative importance of the six hotel attributes (value, location, sleep quality, rooms, cleanliness, and service) among four prominent hotel chain brands located in the United States. Four major and eight minor propositions are suggested for future empirical research based on the results of the four combined studies. Through the analysis of online data, this study may enlighten hotel managers with various ways to accommodate hotel guests’ needs.
", "title": "" }, { "docid": "1b6dfa953ee044fceb17640cc862a534", "text": "The rapid pace at which technological innovations are being introduced poses a potential challenge to retailers, suppliers, and enterprises. The field of Information Technology (IT) has seen rapid growth over the last 30 years (Want 2006; Landt 2005). One of the most promising technological innovations in IT is radio frequency identification (RFID) (Dutta et al. 2007; Whitaker et al. 2007; Bottani et al. 2009). RFID technology evolved in 1945 as an espionage tool invented by Leon Theremin for the Soviet Government (Nikitin et al. 2013, Tedjini et al. 2012). At that time it was mainly used by the military. The progress in microchip design, antenna technology and radio spread spectrum pushed it into various applications like supply chain management, retail, automatic toll collection by tunnel companies, animal tracking, ski lift access, tracking library books, theft prevention, vehicle immobilizer systems, railway rolling stock identification, movement tracking, security, healthcare, printing, textiles and clothing (Weinstein 2005; Liu and Miao 2006; Rao et al. 2005; Wu et al. 2009; Tan 2008). RFID can make companies more competitive by changing the related processes in supply chain, manufacturing and retailing.", "title": "" }, { "docid": "070ecf3890362cb4c24682aff5fa01c6", "text": "This review builds on self-control theory (Carver & Scheier, 1998) to develop a theoretical framework for investigating associations of implicit theories with self-regulation. This framework conceptualizes self-regulation in terms of 3 crucial processes: goal setting, goal operating, and goal monitoring. In this meta-analysis, we included articles that reported a quantifiable assessment of implicit theories and at least 1 self-regulatory process or outcome. With a random effects approach used, meta-analytic results (total unique N = 28,217; k = 113) across diverse achievement domains (68% academic) and populations (age range = 5-42; 10 different nationalities; 58% from United States; 44% female) demonstrated that implicit theories predict distinct self-regulatory processes, which, in turn, predict goal achievement. Incremental theories, which, in contrast to entity theories, are characterized by the belief that human attributes are malleable rather than fixed, significantly predicted goal setting (performance goals, r = -.151; learning goals, r = .187), goal operating (helpless-oriented strategies, r = -.238; mastery-oriented strategies, r = .227), and goal monitoring (negative emotions, r = -.233; expectations, r = .157). The effects for goal setting and goal operating were stronger in the presence (vs. absence) of ego threats such as failure feedback. Discussion emphasizes how the present theoretical analysis merges an implicit theory perspective with self-control theory to advance scholarship and unlock major new directions for basic and applied research.", "title": "" }, { "docid": "c6bd4cd6f90abf20f2619b1d1af33680", "text": "General human action recognition requires understanding of various visual cues. In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images. For the integration, we introduce a Markov chain model which adds cues successively.
The resulting approach is efficient and applicable to action classification as well as to spatial and temporal action localization. The two contributions clearly improve the performance over respective baselines. The overall approach achieves state-of-the-art action classification performance on HMDB51, J-HMDB and NTU RGB+D datasets. Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB.", "title": "" }, { "docid": "7e5c3e774572e59180637da0d3b2d71a", "text": "To improve the safety and comfort of a human-machine system, the machine needs to ‘know,’ in a real-time manner, the human operator in the system. The machine’s assistance to the human can be fine tuned if the machine is able to sense the human’s state and intent. Related to this point, this paper discusses issues of human trust in automation, automation surprises, responsibility and authority. Examples are given of a driver assistance system for advanced automobiles.", "title": "" }, { "docid": "baf8d2176f8c9058967fb3636022cd72", "text": "The ability to provide assistance for a student at the appropriate level is invaluable in the learning process. Not only does it aid the student's learning process, but it also prevents problems, such as student frustration and floundering. Students' key demographic characteristics and their marks in a small number of written assignments can constitute the training set for a regression method in order to predict the student's performance. The scope of this work compares some of the state of the art regression algorithms in the application domain of predicting students' marks. A number of experiments have been conducted with six algorithms, which were trained using datasets provided by the Hellenic Open University. Finally, a prototype version of a software support tool for tutors has been constructed implementing the M5rules algorithm, which proved to be the most appropriate among the tested algorithms.", "title": "" }, { "docid": "ec756798f319e413ee0b4ead614e51bd", "text": "Neural text classification methods typically treat output classes as categorical labels which lack description and semantics. This leads to an inability to train them well on large label sets or to generalize to unseen labels and makes speed and parameterization dependent on the size of the label set. Joint input-label space methods ameliorate the above issues by exploiting label texts or descriptions, but often at the expense of weak performance on the labels seen frequently during training. In this paper, we propose a label-aware text classification model which addresses these issues without compromising performance on the seen labels. The model consists of a joint input-label multiplicative space and a labelset-size independent classification unit and is trained with cross-entropy loss to optimize accuracy. We evaluate our model on text classification for multilingual news and for biomedical text with a large label set. The label-aware model consistently outperforms both monolingual and multilingual classification models which do not leverage label semantics and previous joint input-label space models.", "title": "" }, { "docid": "1705ba479a7ff33eef46e0102d4d4dd0", "text": "Knowing the user’s point of gaze has significant potential to enhance current human-computer interfaces, given that eye movements can be used as an indicator of the attentional state of a user.
The primary obstacle of integrating eye movements into today’s interfaces is the availability of a reliable, low-cost open-source eye-tracking system. Towards making such a system available to interface designers, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches and made it available in an open-source package. We refer to this algorithm as \"starburst\" because of the novel way in which pupil features are detected. This starburst algorithm is more accurate than pure feature-based approaches yet is significantly less time consuming than pure model-based approaches. The current implementation is tailored to tracking eye movements in infrared video obtained from an inexpensive head-mounted eye-tracking system. A validation study was conducted and showed that the technique can reliably estimate eye position with an accuracy of approximately one degree of visual angle.", "title": "" }, { "docid": "322f6321bc34750344064d474206fddb", "text": "BACKGROUND AND PURPOSE\nThis study was undertaken to elucidate whether and how age influences stroke outcome.\n\n\nMETHODS\nThis prospective and community-based study comprised 515 consecutive acute stroke patients. Computed tomographic scan was performed in 79% of patients. Activities of daily living (ADL) and neurological status were assessed weekly during hospital stay using the Barthel Index (BI) and the Scandinavian Stroke Scale (SSS), respectively. Information regarding social condition and comorbidity before stroke was also registered. A multiple regression model was used to analyze the independent influence of age on stroke outcome.\n\n\nRESULTS\nAge was not related to the type of stroke lesion or infarct size. However, age independently influenced initial BI (-4 points per 10 years, P < .01), initial SSS (-2 points per 10 years, P = .01), and discharge BI (-3 points per 10 years, P < .01). No independent influence of age was found regarding mortality within 3 months, discharge SSS, length of hospital stay, and discharge placement. ADL improvement was influenced independently by age (-3 points per 10 years, P < .01), whereas age had no influence on neurological improvement or on speed of recovery.\n\n\nCONCLUSIONS\nAge independently influences stroke outcome selectively in ADL-related aspects (BI) but not in neurological aspects (SSS), suggesting a poorer compensatory ability in elderly stroke patients. Therefore, rehabilitation of elderly stroke patients should be focused more on ADL and compensation rather than on the recovery of neurological status, and age itself should not be a selection criterion for rehabilitation.", "title": "" }, { "docid": "7700a97c65a9e6d9e0fe9abea543b1b3", "text": "Opinionated social media such as product reviews are now widely used by individuals and organizations for their decision making. However, due to the reason of profit or fame, people try to game the system by opinion spamming (e.g., writing fake reviews) to promote or to demote some target products. In recent years, fake review detection has attracted significant attention from both the business and research communities. However, due to the difficulty of human labeling needed for supervised learning and evaluation, the problem remains to be highly challenging. This work proposes a novel angle to the problem by modeling spamicity as latent. An unsupervised model, called Author Spamicity Model (ASM), is proposed.
It works in the Bayesian setting, which facilitates modeling spamicity of authors as latent and allows us to exploit various observed behavioral footprints of reviewers. The intuition is that opinion spammers have different behavioral distributions than non-spammers. This creates a distributional divergence between the latent population distributions of two clusters: spammers and non-spammers. Model inference results in learning the population distributions of the two clusters. Several extensions of ASM are also considered leveraging from different priors. Experiments on a real-life Amazon review dataset demonstrate the effectiveness of the proposed models which significantly outperform the state-of-the-art competitors.", "title": "" }, { "docid": "c35b97eaf1864c5a1e3b2e0d7fdf65e1", "text": "This introduction to the R package dtw is a (slightly) modified version of Giorgino (2009), published in the Journal of Statistical Software. Dynamic time warping is a popular technique for comparing time series, providing both a distance measure that is insensitive to local compression and stretches and the warping which optimally deforms one of the two input series onto the other. A variety of algorithms and constraints have been discussed in the literature. The dtw package provides an unification of them; it allows R users to compute time series alignments mixing freely a variety of continuity constraints, restriction windows, endpoints, local distance definitions, and so on. The package also provides functions for visualizing alignments and constraints using several classic diagram types.", "title": "" }, { "docid": "1512d9e7065ffa9dd2ceb016fd7ea485", "text": "Laser scanners have been an integral part of MEMS research for more than three decades. During the last decade, miniaturized projection displays and various medical-imaging applications became the main driver for progress in MEMS laser scanners. Portable and truly miniaturized projectors became possible with the availability of red, green, and blue diode lasers during the past few years. Inherent traits of the laser scanning technology, such as the very large color gamut, scalability to higher resolutions within the same footprint, and capability of producing an always-in-focus image render it a very viable competitor in mobile projection. Here, we review the requirements on MEMS laser scanners for the demanding display applications, performance levels of the best scanners in the published literature, and the advantages and disadvantages of electrostatic, electromagnetic, piezoelectric, and mechanically coupled actuation principles. Resonant high-frequency scanners, low-frequency linear scanners, and 2-D scanners are included in this review.", "title": "" }, { "docid": "693ec3ccd9327b3a15b6d57cca2060ba", "text": "Wireless communications are advancing rapidly. We are currently at the verge of a new revolutionary advancement in wireless data communications: the 3 Generation of mobile telecommunications. 3G promises to converge mobile technology with Internet connectivity. Wireless data, multimedia applications and integrated services will be among the major driving forces behind 3G. While wireless communications provide great flexibility and mobility, they often come at the expense of security. This is because wireless communications rely on open and public transmission media that raise further security vulnerabilities in addition to the security threats found in regular wired networks. 
Existing security schemes in 2G and 3G systems are inadequate, as there is a greater demand to provide a more flexible, reconfigurable and scalable security mechanism that can evolve as fast as mobile hosts are evolving into full-fledged IP-enabled devices. We propose a lightweight, component-based, reconfigurable security mechanism to enhance the security abilities of mobile devices.", "title": "" }, { "docid": "7ad46a50bb98f22760f07de82c6e2035", "text": "Major theories for explaining the organization of semantic memory in the human brain are premised on the often-observed dichotomous dissociation between living and nonliving objects. Evidence from neuroimaging has been interpreted to suggest that this distinction is reflected in the functional topography of the ventral vision pathway as lateral-to-medial activation gradients. Recently, we observed that similar activation gradients also reflect differences among living stimuli consistent with the semantic dimension of graded animacy. Here, we address whether the salient dichotomous distinction between living and nonliving objects is actually reflected in observable measured brain activity or whether previous observations of a dichotomous dissociation were the illusory result of stimulus sampling biases. Using fMRI, we measured neural responses while participants viewed 10 animal species with high to low animacy and two inanimate categories. Representational similarity analysis of the activity in ventral vision cortex revealed a main axis of variation with high-animacy species maximally different from artifacts and with the least animate species closest to artifacts. Although the associated functional topography mirrored activation gradients observed for animate–inanimate contrasts, we found no evidence for a dichotomous dissociation. We conclude that a central organizing principle of human object vision corresponds to the graded psychological property of animacy with no clear distinction between living and nonliving stimuli. The lack of evidence for a dichotomous dissociation in the measured brain activity challenges theories based on this premise.", "title": "" }, { "docid": "54ba9715a8ef99ee7ca259dc60553999", "text": "The proliferation of smartphones and mobile devices embedding different types of sensors sets up a prodigious and distributed sensing platform. In particular, in the last years there has been an increasing necessity to monitor drivers to identify bad driving habits in order to optimize fuel consumption, to reduce CO2 emissions or, indeed, to design new reliable and fair pricing schemes for the insurance market. In this paper, we analyze the driver sensing capacity of smartphones. We propose a mobile tool that makes use of the most common sensors embedded in current smartphones and implement a Fuzzy Inference System that scores the overall driving behavior by combining different fuzzy sensing data.", "title": "" } ]
scidocsrr
f2032109f9923313629d8146ee976dc4
HadoopDB in action: building real world applications
[ { "docid": "25adc988a57d82ae6de7307d1de5bf71", "text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [1] is a popular open-source map-reduce implementation which is being used in companies like Yahoo, Facebook etc. to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language - HiveQL, which are compiled into map-reduce jobs that are executed using Hadoop. In addition, HiveQL enables users to plug in custom map-reduce scripts into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog - Metastore - that contains schemas and statistics, which are useful in data exploration, query optimization and query compilation. In Facebook, the Hive warehouse contains tens of thousands of tables and stores over 700TB of data and is being used extensively for both reporting and ad-hoc analyses by more than 200 users per month.", "title": "" } ]
[ { "docid": "4d8be8e246b3722fea32ad6a084f1b38", "text": "In the past decade, we have come to rely on computers for various safety and security-critical tasks, such as securing our homes, operating our vehicles, and controlling our finances. To facilitate these tasks, chip manufacturers have begun including trusted execution environments (TEEs) in their processors, which enable critical code (e.g., cryptographic functions) to run in an isolated hardware environment that is protected from the traditional operating system (OS) and its applications. While code in the untrusted environment (e.g., Android or Linux) is forbidden from accessing any memory or state within the TEE, the code running in the TEE, by design, has unrestricted access to the memory of the untrusted OS and its applications. However, due to the isolation between these two environments, the TEE has very limited visibility into the untrusted environment’s security mechanisms (e.g., kernel vs. application memory). In this paper, we introduce BOOMERANG, a class of vulnerabilities that arises due to this semantic separation between the TEE and the untrusted environment. These vulnerabilities permit untrusted user-level applications to read and write any memory location in the untrusted environment, including security-sensitive kernel memory, by leveraging the TEE’s privileged position to perform the operations on its behalf. BOOMERANG can be used to steal sensitive data from other applications, bypass security checks, or even gain full control of the untrusted OS. To quantify the extent of this vulnerability, we developed an automated framework for detecting BOOMERANG bugs within the TEEs of popular mobile phones. Using this framework, we were able to confirm the existence of BOOMERANG on four different TEE platforms, affecting hundreds of millions of devices on the market today. Moreover, we confirmed that, in at least two instances, BOOMERANG could be leveraged to completely compromise the untrusted OS (i.e., Android). While the implications of these vulnerabilities are severe, defenses can be quickly implemented by vendors, and we are currently in contact with the affected TEE vendors to deploy adequate fixes. To this end, we evaluated the two most promising defense proposals and their inherent trade-offs. This analysis led the proposal of a novel BOOMERANG defense, addressing the major shortcomings of the existing defenses with minimal performance overhead. Our findings have been reported to and verified by the corresponding vendors, who are currently in the process of creating security patches.", "title": "" }, { "docid": "48bc9441aceba3a67a5f9d4d88755d63", "text": "We present a proof of concept that machine learning techniques can be used to predict the properties of CNOHF energetic molecules from their molecular structures. We focus on a small but diverse dataset consisting of 109 molecular structures spread across ten compound classes. Up until now, candidate molecules for energetic materials have been screened using predictions from expensive quantum simulations and thermochemical codes. We present a comprehensive comparison of machine learning models and several molecular featurization methods - sum over bonds, custom descriptors, Coulomb matrices, Bag of Bonds, and fingerprints. The best featurization was sum over bonds (bond counting), and the best model was kernel ridge regression. 
Despite having a small data set, we obtain acceptable errors and Pearson correlations for the prediction of detonation pressure, detonation velocity, explosive energy, heat of formation, density, and other properties out of sample. By including another dataset with ≈300 additional molecules in our training we show how the error can be pushed lower, although the convergence with number of molecules is slow. Our work paves the way for future applications of machine learning in this domain, including automated lead generation and interpreting machine learning models to obtain novel chemical insights.", "title": "" }, { "docid": "00eb132ce5063dd983c0c36724f82cec", "text": "This paper analyzes customer product-choice behavior based on the recency and frequency of each customer’s page views on e-commerce sites. Recently, we devised an optimization model for estimating product-choice probabilities that satisfy monotonicity, convexity, and concavity constraints with respect to recency and frequency. This shape-restricted model delivered high predictive performance even when there were few training samples. However, typical e-commerce sites deal in many different varieties of products, so the predictive performance of the model can be further improved by integration of such product heterogeneity. For this purpose, we develop a novel latent-class shape-restricted model for estimating product-choice probabilities for each latent class of products. We also give a tailored expectation-maximization algorithm for parameter estimation. Computational results demonstrate that higher predictive performance is achieved with our latent-class model than with the previous shape-restricted model and common latent-class logistic regression.", "title": "" }, { "docid": "bce7787c5d56985006231471b57926c8", "text": "Isoquercitrin is a rare, natural ingredient with several biological activities that is a key precursor for the synthesis of enzymatically modified isoquercitrin (EMIQ). The enzymatic production of isoquercitrin from rutin catalyzed by hesperidinase is feasible; however, the bioprocess is hindered by low substrate concentration and a long reaction time. Thus, a novel biphase system consisting of [Bmim][BF4]:glycine-sodium hydroxide (pH 9) (10:90, v/v) and glyceryl triacetate (1:1, v/v) was initially established for isoquercitrin production. The biotransformation product was identified using liquid chromatography-mass spectrometry, and the bonding mechanism of the enzyme and substrate was inferred using circular dichroism spectra and kinetic parameters. The highest rutin conversion of 99.5% and isoquercitrin yield of 93.9% were obtained after 3 h. The reaction route is environmentally benign and mild, and the biphase system could be reused. The substrate concentration was increased 2.6-fold, the reaction time was reduced to three tenths the original time. The three-dimensional structure of hesperidinase was changed in the biphase system, which α-helix and random content were reduced and β-sheet content was increased. Thus, the developed biphase system can effectively strengthen the hesperidinase-catalyzed synthesis of isoquercitrin with high yield.", "title": "" }, { "docid": "9de0e4e9667745bddc2b1f5683b4a6cb", "text": "Electronic textile (e-textile) toolkits have been successful in broadening participation in STEAM-related activities, in expanding perceptions of computing, and in engaging users in creative, expressive, and meaningful digital-physical design. 
While a range of well-designed e-textile toolkits exist (e.g., LilyPad), they cater primarily to adults and older children and have a high barrier of entry for some users. We are investigating new approaches to support younger children (K-4) in the creative design, play, and customization of e-textiles and wearables without requiring the creation of code. This demo paper presents one such example of ongoing work: MakerShoe, an e-textile platform for designing shoe-based interactive wearable experiences. We discuss our two participatory design sessions as well as our initial prototype, which uses single-function magnetically attachable electronic modules to support circuit creation and the design of responsive, interactive behaviors.", "title": "" }, { "docid": "5bd2ca6168ffd48c17c1178452a230bc", "text": "Functional imaging studies have examined which brain regions respond to emotional stimuli, but they have not determined how stable personality traits moderate such brain activation. Two personality traits, extraversion and neuroticism, are strongly associated with emotional experience and may thus moderate brain reactivity to emotional stimuli. The present study used functional magnetic resonance imaging to directly test whether individual differences in brain reactivity to emotional stimuli are correlated with extraversion and neuroticism in healthy women. Extraversion was correlated with brain reactivity to positive stimuli in localized brain regions, and neuroticism was correlated with brain reactivity to negative stimuli in localized brain regions. This study provides direct evidence that personality is associated with brain reactivity to emotional stimuli and identifies both common and distinct brain regions where such modulation takes place.", "title": "" }, { "docid": "79a6b8516bdb9d58cfec753ddf3da008", "text": "grammatical relations have been studied for thous and of years. Apollonius Dyscolus, a grammarian in Alexandria in the second ce ntury A .D., gave a syntactic description of Greek that characterized the rela tions of nouns to verbs and other words in the sentence, providing an early characte riza ion of transitivity and “foreshadow[ing] the distinction of subject and obj ect” (Robins 1967). The role of the subject and object and the relation of syntact ic predication were fully developed in the Middle Ages by the modistae, or specul ative grammarians (Robins 1967; Covington 1984). More recent work also depends on assuming an underlying abst ract regularity operating crosslinguistically. Modern work on grammatica l relations and syntactic dependencies was pioneered by Tesnière (1959) and c ontinues in the work of Hudson (1984), Mel’čuk (1988), and others working withi n the dependencybased tradition. Typological studies are also frequently d riven by reference to grammatical relations: for instance, Greenberg (1966) sta te his word order universals by reference to subject and object. Thus, LFG aligns itself with approaches in traditional, nontransformational grammatica l work, in which these abstract relations were assumed. 1.1. Distinctions among Grammatical Functions It is abundantly clear that there are differences in the beha vior of phrases depending on their grammatical function. For example, in lang uages exhibiting “superiority” effects, there is an asymmetry between subje cts and nonsubjects in multiple wh-questions , questions with more than one wh-phrase. 
It is not possible for the object phrase in a wh-question to appear in initial po sition in the sentence if the subject is also a wh-phrase like whator who(Chomsky 1981, Chapter 4): (1) a. Who saw what? b. *What did who see? Not all languages exhibit these effects: for example, King ( 1995, page 56) shows that superiority effects do not hold in Russian. Neverthele ss, many languages do exhibit an asymmetry between subjects and nonsubjects in co nstructions like (1). In fact, however, the subject-nonsubject distinction is on ly e aspect of a rich set of distinctions among grammatical functions. Keenan an d Comrie (1977) propose a more fine-grained analysis of abstract grammatical st ructure, theKeenanComrie hierarchyfor relative clause formation. The Keenan-Comrie hierarch y gives a ranking on grammatical functions that constrains re lativ clause formation by restricting the grammatical function of the argumen t in the relative clause that is interpreted as coreferent with the modified noun. The border between any Functional Information and Functional Structure 9 two adjacent grammatical functions in the hierarchy can rep resent a distinction between acceptable and unacceptable relative clauses in a l anguage, and different languages can set the border at different places on the hiera rchy:1 (2) Keenan-Comrie Hierarchy: SUBJ> DO > IO > OBL > GEN > OCOMP Keenan and Comrie state that “the positions on the Accessibi lity H erarchy are to be understood as specifying a set of possible grammatical di stinctions that a language may make.” In some languages, the hierarchy distingui shes subjects from all other grammatical functions: only the subject of a relat ive clause can be relativized, or interpreted as coreferent with the noun modified by the relative clause. Other languages allow relativization of subjects and objec ts in contrast to other grammatical functions. This more fine-grained hierarchica l structure refines the subject/nonsubject distinction and allows more functiona l distinctions to emerge. Keenan and Comrie speculate that their hierarchy can be exte nded to other processes besides relative clause formation, and indeed Comri e (1975) applies the hierarchy in an analysis of grammatical functions in causat ive constructions. In fact, the Keenan-Comrie hierarchy closely mirrors the “rel ational hierarchy” of Relational Grammar, as given by Bell (1983), upon which much work in Relational Grammar is based: (3) Relational Hierarchy of Relational Grammar: 1 (SUBJ) > 2 (OBJ) > 3 (indirect object ) The Obliqueness Hierarchy of Head-Driven Phrase Structure Grammar (Pollard and Sag 1994) also reflects a hierarchy of grammatical functi o s l ke this one. As demonstrated by a large body of work in Relational Grammar, H PSG, LFG, and other theories, the distinctions inherent in these hierarc hies are relevant across languages with widely differing constituent structure rep resentations, languages that encode grammatical functions by morphological as well as configurational means. There is a clear and well-defined similarity across la ngu ges at this abstract level. 
LFG assumes a universally available inventory of grammatic al functions: (4) Lexical Functional Grammar: SUBJect,OBJect,OBJθ, COMP, XCOMP, OBLiqueθ, ADJunct,XADJunct The labelsOBJθ and OBLθ represent families of relations indexed by semantic roles, with theθ subscript representing the semantic role associated with t he ar1The nomenclature that Keenan and Comrie use is slightly diff erent from that used in this book: in their terminology,DO is the direct object, which we call OBJ; IO is the indirect object;OBL is an oblique noun phrase; GEN is a genitive/possessor of an argument; and OCOMP is an object of comparison. 10 2. Functional Structure gument. For instance, OBJTHEME is the member of the group of thematically restrictedOBJθ functions that bears the semantic role THEME, andOBLSOURCE and OBLGOAL are members of theOBLθ group of grammatical functions filling the SOURCEandGOAL semantic roles. Grammatical functions can be cross-classified in several di fferent ways. The governable grammatical functions SUBJ, OBJ, OBJθ, COMP, XCOMP, andOBLθ can besubcategorized , or required, by a predicate; these contrast with modifying adjunctsADJ andXADJ, which are not subcategorizable. The governable grammatical functions form several natural groups. First, one can distinguish thecore argumentsor terms(SUBJ, OBJ, and the family of thematically restricted objects OBJθ) from the family ofnontermor obliquefunctions OBLθ. Crosslinguistically, term functions behave differently from nonterms in constructions involving anaphoric binding (Chapter 11) an d control (Chapter 12); we will discuss other differences between terms and nonterm s in Section 1.3 of this chapter. Second,SUBJ and the primary object function OBJ are thesemantically unrestricted functions, whileOBLθ and the secondary object function OBJθ are restricted to particular thematic or semantic roles, as the θ in their name indicates. Arguments with no semantic content, like the subject it of a sentence likeIt rained, can fill the semantically unrestricted functions, while th is is impossible for the semantically restricted functions. We will discuss thi distinction in Section 1.4 of this chapter. Finally, opengrammatical functions ( XCOMP andXADJ), whose subject is controlled by an argument external to the function, are disting u shed fromclosed functions. These will be discussed in Section 1.7 of this cha pter. Some linguists have considered inputs and outputs of relati on-changing rules like passive to be good tests for grammatical functionhood: f r example, an argument is classified as an object in an active sentence if it ap pe rs as a subject in the corresponding passive sentence, under the assumptio n that the passive rule turns an object into a passive subject. However, as we will di scuss in Chapter 8, grammatical function alternations like passive are best vi ewed not in terms of transformational rules, or even in terms of lexical rules ma nipulating grammatical function assignment, but as alternative means of linking gr ammatical functions to semantic arguments. Therefore, appeal to these processe s a viable diagnostics of grammatical functions requires a thorough understa nding of the theory of argument linking, and these diagnostics must be used with ca re. In the following, we present the inventory of grammatical fu nctions assumed in LFG theory and discuss a variety of grammatical phenomena that make reference to these functions. 
Some of these phenomena are sensiti ve to a grammatical hierarchy, while others can refer either to specific grammat ical functions or to the members of a larger class of functions. Thus, the same test (f or example, relativizability) might distinguish subjects from all other g ammatical functions in Functional Information and Functional Structure 11 one language, but might pick out both subjects and objects in another language. A number of tests are also specific to particular languages or to particular types of languages: for example, switch-reference constructions, constructions in which a verb is inflected according to whether its subject is corefer ential with the subject of another verb, do not constitute a test for subjecthood in a language in which switch-reference plays no grammatical role. In a theory lik e LFG, grammatical functions are theoretical primitives, not defined in phr asal or semantic terms; therefore, we do not define grammatical functions in terms of a particular, invariant set of syntactic behaviors. Instead, grammatical p henomena can be seen to cluster and distribute according to the grammatical orga nization provided by functional roles. 1.2. Governable Grammatical Functions and Modifiers A major division in grammatical functions distinguishes ar guments of a predicate from modifiers. The arguments are the governable grammatical functions of LFG; they are subcategorized for, or governed, by the predicate. Modifiers modify the phrase with which they appear, but they are not govern ed by the predicate. (5) Governable grammatical functions: SUBJ OBJ XCOMP COMP OBJ θ OBLθ } {{ } ADJ XADJ } {{ } GOVERNABLE GRAMMATICAL FUNCTIONS MODIFIERS Linguists have proposed a number of identifying criteria fo r g vernable grammatical functions. Dowty (1982) proposes two tests to disti ngu sh between go", "title": "" }, { "docid": "05c72978e9b4437c648398d5bb824fed", "text": "In this paper we propose a novel authentication mechanism for session mobility in Next Generation Networks named as Hierarchical Authentication Key Management (HAKM). The design objectives of HAKM are twofold: i) to minimize the authentication latency in NGNs; ii) to provide protection against an assortment of attacks such as denial-of-service attacks, man-in-the-middle attacks, guessing attacks, and capturing node attacks. In order to achieve these objectives, we combine Session Initiation Protocol (SIP) with Hierarchical Mobile IPv6 (HMIPv6) to perform local authentication for session mobility. The concept of group keys and pairwise keys with one way hash function is employed to make HAKM vigorous against the aforesaid attacks. The performance analysis and numerical results demonstrate that HAKM outperforms the existing approaches in terms of latency and protection against the abovementioned attacks.", "title": "" }, { "docid": "b86a70cf8e799607d902c221ecb484ef", "text": "In this supplementary material, we provide additional details for the experimental setup of the paper: Bayesian Optimization with Robust Bayesian Neural Networks. We also present a set of additional plots for each experiment from the main paper. A Extension to parallel Bayesian optimization In this section, we define a variant of our algorithm for settings in which we can perform multiple evaluations of ft in parallel. 
Utilizing such parallel (and asynchronous) function evaluations in a principled manner for BO is non-trivial as we ideally would like to marginalize over the outcomes of currently running evaluations when suggesting new parameters x for which we want to query $f_t$. To achieve this, Snoek et al. [1] proposed an acquisition function which we refer to as Monte Carlo EI $\alpha_{\text{MCEI}}$ that we adopt here for our model. Formally, we estimate $\alpha_{\text{MCEI}}$ as $\alpha_{\text{MCEI}}(x; D, R) = \int_{\theta, y^i} \alpha_{\text{EI}}\big(x; D \cup \{(x_i, y^i)\}_{x_i \in R}\big)\, p(y^i \mid x_i, \theta)\, p(\theta \mid D)\, d\theta\, dy^i \approx \frac{1}{M} \sum_{k=1}^{M} \alpha_{\text{EI}}\big(x; D \cup \{(x_i, y^i)\}_{x_i \in R}\big)$, with $y^i \sim p(y^i \mid x_i, \theta)\, p(\theta \mid D)$ (1), where $\{(x_i, y^i)\}_{x_i \in R}$ is the set of currently running function evaluations (and their predicted targets) and where, in practice, we simply take M = 1 sample. Note that the computation of $\alpha_{\text{EI}}$ within the sum again requires multiple samples from the HMC procedure. As for EI we can differentiate through the computation of Equation (1) and maximize it using gradient ascent. B Computational Requirements For any BO method it is important to keep the computational requirements for training and evaluating the model in mind. To this end we want to draw a comparison between our method and DNGO with respect to computational costs. First, SGHMC sampling is similarly cheap as standard SGD training of neural networks, i.e. training a DNGO model from scratch and sampling via SGHMC has similar computational costs. If we were to start sampling from scratch with every incoming data-point and would fix the number of MCMC steps to be equivalent to K runs through the whole dataset then the computational complexity for sampling would grow linearly with the number of data-points. In practice, we warm-start the optimizer during BO with the last sample from the previous evaluation and perform 2000 SGHMC steps (corresponding to 2000 batches) as burn-in, followed by 50 · 50 sampling steps (retaining every 50th sample). This budget was fixed for all tasks. [Figure 1: Four fits of the sinc function from 20 data-points. On the top-left the regression task was solved using our re-implementation of the Bayes by Backprop (BBB) approach from Blundell et al. [2]. On the top-right we used our re-implementation of the Dropout MC approach from Gal and Ghahramani [3]. In the bottom-left probabilistic Backpropagation [4] was used. On the bottom-right is a fit using SGHMC. As it can be observed most methods are overly confident, or have constant uncertainty bands, in large regions of the input space. Note that this function has no observation noise.] SGHMC sampling is thus slightly faster than DNGO and orders of magnitude faster than GPs (see also the comparison between DNGO and GPs from Snoek et al. [5]); including acquisition function optimization it takes < 30 seconds. With more function evaluations we would increase the budget but expect a runtime of < 2min to select the next point for 50k function evaluations. We note that, if one wants to perform BO in large input spaces (e.g.
for ML models with a very large number of parameters) it could be necessary to also increase the size of the used neural network model. C Additional Experiments C.1 Obtaining well calibrated uncertainty estimates with Bayesian neural networks As mentioned in the main paper, there exists a large body of work on Bayesian methods for neural networks. In preliminary experiments, we tried several of these methods to determine which algorithm was capable of providing well calibrated uncertainty estimates. All approximate inference methods we looked at (except for the MCMC variants) exhibited one of two problems (including the variational inference method from Blundell et al. [2], the method from Gal and Ghahramani [3] as well as the expectation propagation based approach from Hernández-Lobato and Adams [4]): either they did severely underfit the data, or they poorly predicted the uncertainty in regions far from observed data points. The latter behaviour is exemplified in Figure 1 (left) where we regressed the sinc function from 20 observations with a two layer neural network (50 tanh units each) using our implementation of the Bayes by Backprop (BBB) aproach from Blundell et al. [2]. In contrast, a fit of the same data with our method more faithfully represents model uncertainty as depicted in Figure 1 (right).", "title": "" }, { "docid": "9113f66e13fc6d8fce4a0bc8bcea31b6", "text": "Motivated by the observation that coarse and fine resolutions of an image reveal different structures in the underlying visual phenomenon, we present a model based on the Deep Belief Network (DBN) which learns features from the multiscale representation of images. A Laplacian Pyramid is first constructed for each image. DBNs are then trained separately at each level of the pyramid. Finally, a top level RBM combines these DBNs into a single network we call the Multiresolution Deep Belief Network (MrDBN). Experiments show that MrDBNs generalize better than standard DBNs on NORB classification and TIMIT phone recognition. In the domain of generative learning, we demonstrate the superiority of MrDBNs at modeling face images.", "title": "" }, { "docid": "c27eecae33fe87779d3452002c1bdf8a", "text": "When intelligent agents learn visuomotor behaviors from human demonstrations, they may benefit from knowing where the human is allocating visual attention, which can be inferred from their gaze. A wealth of information regarding intelligent decision making is conveyed by human gaze allocation; hence, exploiting such information has the potential to improve the agents’ performance. With this motivation, we propose the AGIL (Attention Guided Imitation Learning) framework. We collect high-quality human action and gaze data while playing Atari games in a carefully controlled experimental setting. Using these data, we first train a deep neural network that can predict human gaze positions and visual attention with high accuracy (the gaze network) and then train another network to predict human actions (the policy network). Incorporating the learned attention model from the gaze network into the policy network significantly improves the action prediction accuracy and task performance.", "title": "" }, { "docid": "f818a1cab06c4650a0aa250c076f5f88", "text": "Shannon’s determination of the capacity of the linear Gaussian channel has posed a magnificent challenge to succeeding generations of researchers. This paper surveys how this challenge has been met during the past half century. 
Orthogonal minimum-bandwidth modulation techniques and channel capacity are discussed. Binary coding techniques for low-signal-to-noise ratio (SNR) channels and nonbinary coding techniques for high-SNR channels are reviewed. Recent developments, which now allow capacity to be approached on any linear Gaussian channel, are surveyed. These new capacity-approaching techniques include turbo coding and decoding, multilevel coding, and combined coding/precoding for intersymbol-interference channels.", "title": "" }, { "docid": "baad68c1adef7b72d78745fe03db0c57", "text": "In this paper, we propose a new visualization approach based on a Sensitivity Analysis (SA) to extract human understandable knowledge from supervised learning black box data mining models, such as Neural Networks (NNs), Support Vector Machines (SVMs) and ensembles, including Random Forests (RFs). Five SA methods (three of which are purely new) and four measures of input importance (one novel) are presented. Also, the SA approach is adapted to handle discrete variables and to aggregate multiple sensitivity responses. Moreover, several visualizations for the SA results are introduced, such as input pair importance color matrix and variable effect characteristic surface. A wide range of experiments was performed in order to test the SA methods and measures by fitting four well-known models (NN, SVM, RF and decision trees) to synthetic datasets (five regression and five classification tasks). In addition, the visualization capabilities of the SA are demonstrated using four real-world datasets (e.g., bank direct marketing and white wine quality). © 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "656e5502d9067dc08d249c122e7d4bb1", "text": "Computer-based data acquisition systems play an important role in clinical monitoring and in the development of new monitoring tools. LabVIEW (National Instruments, Austin, TX) is a data acquisition and programming environment that allows flexible acquisition and processing of analog and digital data. The main feature that distinguishes LabVIEW from other data acquisition programs is its highly modular graphical programming language, “G,” and a large library of mathematical and statistical functions. The advantage of graphical programming is that the code is flexible, reusable, and self-documenting. Subroutines can be saved in a library and reused without modification in other programs. This dramatically reduces development time and enables researchers to develop or modify their own programs. LabVIEW uses a large amount of processing power and computer memory, thus requiring a powerful computer. A large-screen monitor is desirable when developing larger applications. LabVIEW is excellently suited for testing new monitoring paradigms, analysis algorithms, or user interfaces. The typical LabVIEW user is the researcher who wants to develop a new monitoring technique, a set of new (derived) variables by integrating signals from several existing patient monitors, closed-loop control of a physiological variable, or a physiological simulator.", "title": "" }, { "docid": "b252645eaff7a79df8c1ea9873ef08c2", "text": "In this paper, we present a knowledge-based decision system for healthcare.
It not only performs intelligent diagnoses but also produces inferential advices for the interrelated diseases involving overweight or obese, diabetes, high blood pressure and high cholesterol conditions. Moreover, it performs deep diagnoses for the pregnant Asian women; for the unknown type of diabetes and for the risk of a heart attack and stroke increases. Also, it generates a risk report to remind patients to pay attention on their health. Our knowledge-based decision system provides the efficient and effective way to take care of the patient's health, to promote the human's quality of life and to provide disease monitoring and control to alleviate or to reduce the medical condition. In the long-term, this system will help us to reduce our medicare investments and to provide high quality healthy lives.", "title": "" }, { "docid": "3b5555c5624fc11bbd24cfb8fff669f0", "text": "Redundancy resolution is a critical problem in the control of robotic manipulators. Recurrent neural networks (RNNs), as inherently parallel processing models for time-sequence processing, are potentially applicable for the motion control of manipulators. However, the development of neural models for high-accuracy and real-time control is a challenging problem. This paper identifies two limitations of the existing RNN solutions for manipulator control, i.e., position error accumulation and the convex restriction on the projection set, and overcomes them by proposing two modified neural network models. Our method allows nonconvex sets for projection operations, and control error does not accumulate over time in the presence of noise. Unlike most works in which RNNs are used to process time sequences, the proposed approach is model-based and training-free, which makes it possible to achieve fast tracking of reference signals with superior robustness and accuracy. Theoretical analysis reveals the global stability of a system under the control of the proposed neural networks. Simulation results confirm the effectiveness of the proposed control method in both the position regulation and tracking control of redundant PUMA 560 manipulators.", "title": "" }, { "docid": "a05b4878404f9127d576d90d6b241588", "text": "This paper presents an air-filled substrate integrated waveguide (AFSIW) filter post-process tuning technique. The emerging high-performance AFSIW technology is of high interest for the design of microwave and millimeter-wave substrate integrated systems based on low-cost multilayer printed circuit board (PCB) process. However, to comply with stringent specifications, especially for space, aeronautical and safety applications, a filter post-process tuning technique is desired. AFSIW single pole filter post-process tuning using a capacitive post is theoretically analyzed. It is demonstrated that a tuning of more than 3% of the resonant frequency is achieved at 21 GHz using a 0.3 mm radius post with a 40% insertion ratio. For experimental demonstration, a fourth-order AFSIW band pass filter operating in the 20.88 to 21.11 GHz band is designed and fabricated. Due to fabrication tolerances, it is shown that its performances are not in line with expected results. Using capacitive post tuning, characteristics are improved and agree with optimized results. 
This post-process tuning can be used for other types of substrate integrated devices.", "title": "" }, { "docid": "5be572ea448bfe40654956112cecd4e1", "text": "BACKGROUND\nBeta blockers reduce mortality in patients who have chronic heart failure, systolic dysfunction, and are on background treatment with diuretics and angiotensin-converting enzyme inhibitors. We aimed to compare the effects of carvedilol and metoprolol on clinical outcome.\n\n\nMETHODS\nIn a multicentre, double-blind, and randomised parallel group trial, we assigned 1511 patients with chronic heart failure to treatment with carvedilol (target dose 25 mg twice daily) and 1518 to metoprolol (metoprolol tartrate, target dose 50 mg twice daily). Patients were required to have chronic heart failure (NYHA II-IV), previous admission for a cardiovascular reason, an ejection fraction of less than 0.35, and to have been treated optimally with diuretics and angiotensin-converting enzyme inhibitors unless not tolerated. The primary endpoints were all-cause mortality and the composite endpoint of all-cause mortality or all-cause admission. Analysis was done by intention to treat.\n\n\nFINDINGS\nThe mean study duration was 58 months (SD 6). The mean ejection fraction was 0.26 (0.07) and the mean age 62 years (11). The all-cause mortality was 34% (512 of 1511) for carvedilol and 40% (600 of 1518) for metoprolol (hazard ratio 0.83 [95% CI 0.74-0.93], p=0.0017). The reduction of all-cause mortality was consistent across predefined subgroups. The composite endpoint of mortality or all-cause admission occurred in 1116 (74%) of 1511 on carvedilol and in 1160 (76%) of 1518 on metoprolol (0.94 [0.86-1.02], p=0.122). Incidence of side-effects and drug withdrawals did not differ by much between the two study groups.\n\n\nINTERPRETATION\nOur results suggest that carvedilol extends survival compared with metoprolol.", "title": "" }, { "docid": "84cf9fab65ac25aadd69ee3e8af97aef", "text": "This final article of the four part series on the current concepts of tooth wear will provide the reader with an evaluation of the data available in the contemporary literature with regards to the survival analysis of differing restorative materials, and their respective methods of application to treat tooth wear. It is vital that the dental operator is familiar with the role of differing materials which may be used to restore the worn dentition, some of which may prove to be more suitable for the management of particular patterns of tooth wear than others. The active management of tooth wear unfortunately commits the patient to a lifelong need for considerable maintenance, and it is imperative that this is understood from the outset.", "title": "" } ]
scidocsrr
008618378b3f79fc1f953e0ffdb59367
Exploring Chinese users' acceptance of instant messaging using the theory of planned behavior, the technology acceptance model, and the flow theory
[ { "docid": "bd13f54cd08fe2626fe8de4edce49197", "text": "Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various, corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that re ̄ects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context. # 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "13452d0ceb4dfd059f1b48dba6bf5468", "text": "This paper presents an extension to the technology acceptance model (TAM) and empirically examines it in an enterprise resource planning (ERP) implementation environment. The study evaluated the impact of one belief construct (shared beliefs in the benefits of a technology) and two widely recognized technology implementation success factors (training and communication) on the perceived usefulness and perceived ease of use during technology implementation. Shared beliefs refer to the beliefs that organizational participants share with their peers and superiors on the benefits of the ERP system. Using data gathered from the implementation of an ERP system, we showed that both training and project communication influence the shared beliefs that users form about the benefits of the technology and that the shared beliefs influence the perceived usefulness and ease of use of the technology. Thus, we provided empirical and theoretical support for the use of managerial interventions, such as training and communication, to influence the acceptance of technology, since perceived usefulness and ease of use contribute to behavioral intention to use the technology. # 2003 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "4f3d2b869322125a8fad8a39726c99f8", "text": "Routing Protocol for Low Power and Lossy Networks (RPL) is the routing protocol for IoT and Wireless Sensor Networks. RPL is a lightweight protocol, having good routing functionality, but has basic security functionality. This may make RPL vulnerable to various attacks. Providing security to IoT networks is challenging, due to their constrained nature and connectivity to the unsecured internet. This survey presents the elaborated review on the security of Routing Protocol for Low Power and Lossy Networks (RPL). This survey is built upon the previous work on RPL security and adapts to the security issues and constraints specific to Internet of Things. An approach to classifying RPL attacks is made based on Confidentiality, Integrity, and Availability. Along with that, we surveyed existing solutions to attacks which are evaluated and given possible solutions (theoretically, from various literature) to the attacks which are not yet evaluated. We further conclude with open research challenges and future work needs to be done in order to secure RPL for Internet of Things (IoT).", "title": "" }, { "docid": "f77d2ea4202bb6d13efc0480f7890b2e", "text": "The impending end of Moore’s Law has started a rethinking of the way computers are built and computation is done. This paper discusses two directions that are currently attracting much attention as future computation paradigms: the merging of logic and memory, and brain-inspired computing. Natural computing has been known for its innovative methods to conduct computation, and as such may play an important role in the shaping of the post-Moore era.", "title": "" }, { "docid": "47929b2ff4aa29bf115a6728173feed7", "text": "This paper presents a metaobject protocol (MOP) for C++. This MOP was designed to bring the power of meta-programming to C++ programmers. It avoids penalties on runtime performance by adopting a new meta-architecture in which the metaobjects control the compilation of programs instead of being active during program execution. This allows the MOP to be used to implement libraries of efficient, transparent language extensions.", "title": "" }, { "docid": "7cb6582bf81aea75818eef2637c95c79", "text": "Although multi-frame super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified, or important factors such as blur kernel and noise level are assumed to be known. Such models cannot deal with the scene and imaging conditions that vary from one sequence to another. In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel and noise level while reconstructing the original high-res frames. As a result, our system not only produces very promising super resolution results that outperform the state of the art, but also adapts to a variety of noise levels and blur kernels. Theoretical analysis of the relationship between blur kernel, noise level and frequency-wise reconstruction rate is also provided, consistent with our experimental results.", "title": "" }, { "docid": "4509268638539ec5f1e9a521fce3be02", "text": "The Internet and the World Wide Web provide a way to store and share information, especially in academic fields. Community-based research paper sharing systems, such as CiteULike, have become popular among researchers. 
This paper proposes a framework for a tag-based research paper recommender system. The proposed approach exploits the use of sets of tags for recommending research papers to each user. The preliminary evaluation shows that user self-defined tags could be used as a profile for each individual user. This recommender system demonstrated an encouraging preliminary result with the overall accuracy percentage up to 91.66%.", "title": "" }, { "docid": "4c6c3b1b951bc472f0ccc6ce92091f70", "text": "Tendon disorders are common and lead to significant disability, pain, healthcare cost, and lost productivity. A wide range of injury mechanisms exist leading to tendinopathy or tendon rupture. Tears can occur in healthy tendons that are acutely overloaded (e.g., during a high speed or high impact event) or lacerated (e.g., a knife injury). Tendinitis or tendinosis can occur in tendons exposed to overuse conditions (e.g., an elite swimmer's training regimen) or intrinsic tissue degeneration (e.g., age-related degeneration). The healing potential of a torn or pathologic tendon varies depending on anatomic location (e.g., Achilles vs. rotator cuff) and local environment (e.g., intrasynovial vs. extrasynovial). Although healing occurs to varying degrees, in general healing of repaired tendons follows the typical wound healing course, including an early inflammatory phase, followed by proliferative and remodeling phases. Numerous treatment approaches have been attempted to improve tendon healing, including growth factor- and cell-based therapies and rehabilitation protocols. This review will describe the current state of knowledge of injury and repair of the three most common tendinopathies--flexor tendon lacerations, Achilles tendon rupture, and rotator cuff disorders--with a particular focus on the use of animal models for understanding tendon healing.", "title": "" }, { "docid": "d2b5f28a7f32de167ec4c907472af90b", "text": "Brain-computer interfacing (BCI) is a steadily growing area of research. While initially BCI research was focused on applications for paralyzed patients, increasingly more alternative applications in healthy human subjects are proposed and investigated. In particular, monitoring of mental states and decoding of covert user states have seen a strong rise of interest. Here, we present some examples of such novel applications which provide evidence for the promising potential of BCI technology for non-medical uses. Furthermore, we discuss distinct methodological improvements required to bring non-medical applications of BCI technology to a diversity of layperson target groups, e.g., ease of use, minimal training, general usability, short control latencies.", "title": "" }, { "docid": "86c481ed5b7e57230a244676d315ca6c", "text": "Flocculation harvesting of the fucoxanthin-rich marine microalga Isochrysis galbana has received little attention. Therefore, we attempted to screen for an optimal chemical flocculant and optimize flocculation conditions from five chemical flocculants—ferric chloride (FC), aluminum sulfate (AS), polyaluminum chloride (PAC), aluminum potassium sulfate (APS), and zinc sulfate (ZS)—for effective flocculation of I. galbana. The growth rate, photosynthetic performance, and fucoxanthin content were determined in re-suspended flocculated algal cells and in the flocculation supernatant cultured algal cells. The results showed that high growth rate and fucoxanthin accumulation were observed when FC was used as the flocculant in I. 
galbana cultures, which indicated that FC may cause less harm to I. galbana than the other aluminum-based flocculants. Furthermore, satisfactory flocculation efficiency was also observed when FC was used to flocculate I. galbana, and the FC dosage was less than that required for flocculation of I. galbana using PAC, APS, and AS. Thus, we selected FC as the optimal flocculant for harvesting I. galbana based on its flocculation efficiency together with algal physiological performance, growth rate, and fucoxanthin content.", "title": "" }, { "docid": "c5cde43ff2a3f825a7e077a1d9d8d4e8", "text": "Research on sensor-based activity recognition has, recently, made significant progress and is attracting growing attention in a number of disciplines and application domains. However, there is a lack of high-level overview on this topic that can inform related communities of the research state of the art. In this paper, we present a comprehensive survey to examine the development and current status of various aspects of sensor-based activity recognition. We first discuss the general rationale and distinctions of vision-based and sensor-based activity recognition. Then, we review the major approaches and methods associated with sensor-based activity monitoring, modeling, and recognition from which strengths and weaknesses of those approaches are highlighted. We make a primary distinction in this paper between data-driven and knowledge-driven approaches, and use this distinction to structure our survey. We also discuss some promising directions for future research.", "title": "" }, { "docid": "0fddd08dfdf2c545381b5a7580ba717d", "text": "Deep neural networks (DNNs) trained on large-scale datasets have recently achieved impressive improvements in face recognition. But a persistent challenge remains to develop methods capable of handling large pose variations that are relatively under-represented in training data. This paper presents a method for learning a feature representation that is invariant to pose, without requiring extensive pose coverage in training data. We first propose to use a synthesis network for generating non-frontal views from a single frontal image, in order to increase the diversity of training data while preserving accurate facial details that are critical for identity discrimination. Our next contribution is a multi-source multi-task DNN that seeks a rich embedding representing identity information, as well as information such as pose and landmark locations. Finally, we propose a Siamese network to explicitly disentangle identity and pose, by demanding alignment between the feature reconstructions through various combinations of identity and pose features obtained from two images of the same subject. Experiments on face datasets in both controlled and wild scenarios, such as MultiPIE, LFW and 300WLP, show that our method consistently outperforms the state-of-the-art, especially on images with large head pose variations.", "title": "" }, { "docid": "1f0dbec4f21549780d25aa81401494c6", "text": "Parallel scientific applications require high-performance I/O support from underlying file systems. A comprehensive understanding of the expected workload is therefore essential for the design of high-performance parallel file systems. We re-examine the workload characteristics in parallel computing environments in the light of recent technology advances and new applications. We analyze application traces from a cluster with hundreds of nodes. 
On average, each application has only one or two typical request sizes. Large requests from several hundred kilobytes to several megabytes are very common. Although in some applications, small requests account for more than 90% of all requests, almost all of the I/O data are transferred by large requests. All of these applications show bursty access patterns. More than 65% of write requests have inter-arrival times within one millisecond in most applications. By running the same benchmark on different file models, we also find that the write throughput of using an individual output file for each node exceeds that of using a shared file for all nodes by a factor of 5. This indicates that current file systems are not well optimized for file sharing.", "title": "" }, { "docid": "8caa44dc9d57b91c3455b66b152c131b", "text": "Prediction of protein function is of significance in studying biological processes. One approach for function prediction is to classify a protein into functional family. Support vector machine (SVM) is a useful method for such classification, which may involve proteins with diverse sequence distribution. We have developed a web-based software, SVMProt, for SVM classification of a protein into functional family from its primary sequence. SVMProt classification system is trained from representative proteins of a number of functional families and seed proteins of Pfam curated protein families. It currently covers 54 functional families and additional families will be added in the near future. The computed accuracy for protein family classification is found to be in the range of 69.1-99.6%. SVMProt shows a certain degree of capability for the classification of distantly related proteins and homologous proteins of different function and thus may be used as a protein function prediction tool that complements sequence alignment methods. SVMProt can be accessed at http://jing.cz3.nus.edu.sg/cgi-bin/svmprot.cgi.", "title": "" }, { "docid": "29e1ecb7b1dfbf4ca2a229726dcab12e", "text": "The recently developed depth sensors, e.g., the Kinect sensor, have provided new opportunities for human-computer interaction (HCI). Although great progress has been made by leveraging the Kinect sensor, e.g., in human body tracking, face recognition and human action recognition, robust hand gesture recognition remains an open problem. Compared to the entire human body, the hand is a smaller object with more complex articulations and more easily affected by segmentation errors. It is thus a very challenging problem to recognize hand gestures. This paper focuses on building a robust part-based hand gesture recognition system using Kinect sensor. To handle the noisy hand shapes obtained from the Kinect sensor, we propose a novel distance metric, Finger-Earth Mover's Distance (FEMD), to measure the dissimilarity between hand shapes. As it only matches the finger parts while not the whole hand, it can better distinguish the hand gestures of slight differences. The extensive experiments demonstrate that our hand gesture recognition system is accurate (a 93.2% mean accuracy on a challenging 10-gesture dataset), efficient (average 0.0750 s per frame), robust to hand articulations, distortions and orientation or scale changes, and can work in uncontrolled environments (cluttered backgrounds and lighting conditions). 
The superiority of our system is further demonstrated in two real-life HCI applications.", "title": "" }, { "docid": "3dab0441ca1e4fb39296be8006611690", "text": "A content-based personalized recommendation system learns user specific profiles from user feedback so that it can deliver information tailored to each individual user's interest. A system serving millions of users can learn a better user profile for a new user, or a user with little feedback, by borrowing information from other users through the use of a Bayesian hierarchical model. Learning the model parameters to optimize the joint data likelihood from millions of users is very computationally expensive. The commonly used EM algorithm converges very slowly due to the sparseness of the data in IR applications. This paper proposes a new fast learning technique to learn a large number of individual user profiles. The efficacy and efficiency of the proposed algorithm are justified by theory and demonstrated on actual user data from Netflix and MovieLens.", "title": "" }, { "docid": "efd79ed4f8fba97f0ee4a2774f40da6a", "text": "This paper presents a new algorithm for the extrinsic calibration of a perspective camera and an invisible 2D laser-rangefinder (LRF). The calibration is achieved by freely moving a checkerboard pattern in order to obtain plane poses in camera coordinates and depth readings in the LRF reference frame. The problem of estimating the rigid displacement between the two sensors is formulated as one of registering a set of planes and lines in the 3D space. It is proven for the first time that the alignment of three plane-line correspondences has at most eight solutions that can be determined by solving a standard p3p problem and a linear system of equations. This leads to a minimal closed-form solution for the extrinsic calibration that can be used as hypothesis generator in a RANSAC paradigm. Our calibration approach is validated through simulation and real experiments that show the superiority with respect to the current state-of-the-art method requiring a minimum of five input planes.", "title": "" }, { "docid": "6eff790c76e7eb7016eef6d306a0dde0", "text": "To cite: Rozenblum R, Bates DW. BMJ Qual Saf 2013;22:183–186. Patients are central to healthcare delivery, yet all too often their perspectives and input have not been considered by providers. 2 This is beginning to change rapidly and is having a major impact across a range of dimensions. Patients are becoming more engaged in their care and patient-centred healthcare has emerged as a major domain of quality. At the same time, social media in particular and the internet more broadly are widely recognised as having produced huge effects across societies. For example, few would have predicted the Arab Spring, yet it was clearly enabled by media such as Facebook and Twitter. Now these technologies are beginning to pervade the healthcare space, just as they have so many others. But what will their effects be? These three domains—patient-centred healthcare, social media and the internet— are beginning to come together, with powerful and unpredictable consequences. We believe that they have the potential to create a major shift in how patients and healthcare organisations connect, in effect, the ‘perfect storm’, a phrase that has been used to describe a situation in which a rare combination of circumstances result in an event of unusual magnitude creating the potential for non-linear change. 
Historically, patients have paid relatively little attention to quality, safety and the experiences large groups of other patients have had, and have made choices about where to get healthcare based largely on factors like reputation, the recommendations of a friend or proximity. Part of the reason for this was that information about quality or the opinions of others about their care was hard to access before the internet. Today, patients appear to be becoming more engaged with their care in general, and one of the many results is that they are increasingly using the internet to share and rate their experiences of health care. They are also using the internet to connect with others having similar illnesses, to share experiences, and beginning to manage their illnesses by leveraging these technologies. While it is not yet clear what impact patients’ use of the internet and social media will have on healthcare, they will definitely have a major effect. Healthcare organisations have generally been laggards in this space—they need to start thinking about how they will use the internet in a variety of ways, with specific examples being leveraging the growing number of patients that are using the internet to describe their experiences of healthcare and how they can incorporate patient’s feedback via the internet into the organisational quality improvement process.", "title": "" }, { "docid": "2b6f95a75b116150311153fe0e55c11a", "text": "Gene–gene interactions (GGIs) are important markers for determining susceptibility to a disease. Multifactor dimensionality reduction (MDR) is a popular algorithm for detecting GGIs and primarily adopts the correct classification rate (CCR) to assess the quality of a GGI. However, CCR measurement alone may not successfully detect certain GGIs because of potential model preferences and disease complexities. In this study, multiple-criteria decision analysis (MCDA) based on MDR was named MCDA-MDR and proposed for detecting GGIs. MCDA facilitates MDR to simultaneously adopt multiple measures within the two-way contingency table of MDR to assess GGIs; the CCR and rule utility measure were employed. Cross-validation consistency was adopted to determine the most favorable GGIs among the Pareto sets. Simulation studies were conducted to compare the detection success rates of the MDR-only-based measure and MCDA-MDR, revealing that MCDA-MDR had superior detection success rates. The Wellcome Trust Case Control Consortium dataset was analyzed using MCDA-MDR to detect GGIs associated with coronary artery disease, and MCDA-MDR successfully detected numerous significant GGIs (p < 0.001). MCDA-MDR performance assessment revealed that the applied MCDA successfully enhanced the GGI detection success rate of the MDR-based method compared with MDR alone.", "title": "" }, { "docid": "1982a3809a6322a1c07f004babbc09b2", "text": "Accuracy is one of the basic principles of journalism. However, it is increasingly hard to manage due to the diversity of news media. Some editors of online news tend to use catchy headlines which trick readers into clicking. These headlines are either ambiguous or misleading, degrading the reading experience of the audience. Thus, identifying inaccurate news headlines is a task worth studying. Previous work names these headlines “clickbaits” and mainly focus on the features extracted from the headlines, which limits the performance since the consistency between headlines and news bodies is underappreciated. 
In this paper, we clearly redefine the problem and identify ambiguous and misleading headlines separately. We utilize class sequential rules to exploit structure information when detecting ambiguous headlines. For the identification of misleading headlines, we extract features based on the congruence between headlines and bodies. To make use of the large unlabeled data set, we apply a co-training method and gain an increase in performance. The experiment results show the effectiveness of our methods. Then we use our classifiers to detect inaccurate headlines crawled from different sources and conduct a data analysis.", "title": "" }, { "docid": "7b6c039783091260cee03704ce9748d8", "text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.", "title": "" }, { "docid": "274a9094764edd249f1682fbca93a866", "text": "Visual saliency detection is a challenging problem in computer vision, but one of great importance and numerous applications. In this paper, we propose a novel model for bottom-up saliency within the Bayesian framework by exploiting low and mid level cues. In contrast to most existing methods that operate directly on low level cues, we propose an algorithm in which a coarse saliency region is first obtained via a convex hull of interest points. We also analyze the saliency information with mid level visual cues via superpixels. We present a Laplacian sparse subspace clustering method to group superpixels with local features, and analyze the results with respect to the coarse saliency region to compute the prior saliency map. We use the low level visual cues based on the convex hull to compute the observation likelihood, thereby facilitating inference of Bayesian saliency at each pixel. Extensive experiments on a large data set show that our Bayesian saliency model performs favorably against the state-of-the-art algorithms.", "title": "" } ]
scidocsrr
838b615ea252ff95ad8272895de64d58
Software-Defined Multi-cloud Computing: A Vision, Architectural Elements, and Future Directions
[ { "docid": "5a85db36e049c371f0b0e689e7e73d4a", "text": "Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm.While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore the systems-level challenges in achieving scalable, fault-tolerant quantum computation. In this lecture,we provide an engineering-oriented introduction to quantum computation with an overview of the theory behind key quantum algorithms. Next, we look at architectural case studies based upon experimental data and future projections for quantum computation implemented using trapped ions. While we focus here on architectures targeted for realization using trapped ions, the techniques for quantum computer architecture design, quantum fault-tolerance, and compilation described in this lecture are applicable to many other physical technologies that may be viable candidates for building a large-scale quantum computing system. We also discuss general issues involved with programming a quantum computer as well as a discussion of work on quantum architectures based on quantum teleportation. Finally, we consider some of the open issues remaining in the design of quantum computers.", "title": "" }, { "docid": "6fd511ffcdb44c39ecad1a9f71a592cc", "text": "s Providing Supporting Policy Compositional Operators Functional Composition Network Layered Abstract Topologies Topological Decomposition Packet Extensible Headers Policy & Network Abstractions Pyretic (Contributions)", "title": "" } ]
[ { "docid": "13d9b338b83a5fcf75f74607bf7428a7", "text": "We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing trainable address vectors. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous and discrete read and write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU controller. We provide extensive analysis of our model and compare different variations of neural Turing machines on this task. We show that our model outperforms long short-term memory and NTM variants. We provide further experimental results on the sequential MNIST, Stanford Natural Language Inference, associative recall, and copy tasks.", "title": "" }, { "docid": "2630e22fb604a0657aef4c7d8e56a89f", "text": "Social media has recently gained tremendous fame as a highly impactful channel of communication in these modern times of digitized living. It has been put on a pedestal across varied streams for facilitating participatory interaction amongst businesses, groups, societies, organizations, consumers, communities, forums, and the like. This subject has received increased attention in the literature with many of its practical applications including social media marketing (SMM) being elaborated, analysed, and recorded by many studies. This study is aimed at collating the existing research on SMM to present a review of seventy one articles that will bring together the many facets of this rapidly blooming media marketing form. The surfacing limitations in the literature on social media have also been identified and potential research directions have been offered.", "title": "" }, { "docid": "04b62ed72ddf8f97b9cb8b4e59a279c1", "text": "This paper aims to explore some of the manifold and changing links that official Pakistani state discourses forged between women and work from the 1940s to the late 2000s. The focus of the analysis is on discursive spaces that have been created for women engaged in non-domestic work. Starting from an interpretation of the existing academic literature, this paper argues that Pakistani women’s non-domestic work has been conceptualised in three major ways: as a contribution to national development, as a danger to the nation, and as non-existent. The paper concludes that although some conceptualisations of work have been more powerful than others and, at specific historical junctures, have become part of concrete state policies, alternative conceptualisations have always existed alongside them. Disclosing the state’s implication in the discursive construction of working women’s identities might contribute to the destabilisation of hegemonic concepts of gendered divisions of labour in Pakistan. DOI: https://doi.org/10.1016/j.wsif.2013.05.007 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-78605 Accepted Version Originally published at: Grünenfelder, Julia (2013). Discourses of gender identities and gender roles in Pakistan: Women and non-domestic work in political representations. Women’s Studies International Forum, 40:68-77. 
DOI: https://doi.org/10.1016/j.wsif.2013.05.007", "title": "" }, { "docid": "28fa7f0db97f865860bbf19b84b22937", "text": "This paper presents an in-store e-commerce system that provides shopping assistance and personalized advertising through the use of a new concept in context aware computing, dynamic contextualization. This system, PromoPad, utilizes augmented reality technologies on a hand-held Tablet PC to provide for dynamic modification of the contextual settings of products on store shelves through the use of see-through vision with augmentations. This real-time modification of the perception of context, dynamic contextualization, moves beyond the traditional concept of contextaware computing into context modification. The technical requirements for realizing dynamic contextualization using augmented reality technologies are described in detail. The target design of the PromoPad is a consumer friendly shopping assistant that requires minimum user effort and is practical in a public environment such as a shopping mall or a grocery store.", "title": "" }, { "docid": "9852ef6f1d5df6ca1cee8aebef2f5b78", "text": "A broadband coplanar waveguide (CPW) fed bow-tie slot antenna is proposed. By using a linear tapered transition, a 37% impedance bandwidth at -10 dB return loss is achieved. The antenna structure is very simple and the radiation patterns of the antenna in the whole bandwidth remain stable; moreover, the cross-polarization level is lower. An antenna model is fabricated on a high dielectric constant substrate. Experiments show that the simulated results agree well with the measured ones.", "title": "" }, { "docid": "89eafb08086a41497ff8a42664928577", "text": "Youth Unemployment in Korea: From a German and Transitional Labour Market Point of View By conventional statistics, youth unemployment seems to be quite moderate in Korea: ‘only’ 9.6 percent of the ‘active’ youth labour force was unemployed compared to 21.4 percent in EU-27 in 2011. Germany, with a youth unemployment rate of 8.5 percent, is one of the very few European countries outperforming Korea. But the Korean case is in one respect unusual. From the perspective of intergenerational risk sharing Korea’s youth unemployment rate is 4.6 times higher than the unemployment rate of adults aged 45 to 54; in Germany, this figure is only 1.7. Further peculiarities come up if unemployment is measured by the number of youth not in employment, education or training (NEET) in percent of the total youth population. Korea’s NEET figures are at the top in OECD countries, especially for youth with tertiary education. This paper throws some light to explain this conundrum: It sketches, first, the main causes of youth unemployment and the general policy interventions; because a large part of the problem is structural, possible immediate measures to avoid long-term scar effects for the unemployed youth are briefly reviewed; differences between Europe and the United States show in particular the importance of automatic stabilizers like unemployment insurance in order to reduce the pressure on unfavourable risk sharing for youth in times of recession. The main part is devoted to possible lessons for Korea from Europe, in particular from Germany. Dual education and vocational training systems that emphasise middle level and market oriented skills are identified as institutional device both for fairer intergenerational risk sharing as well as for a smoother transition from school to work. 
In its outlook, the paper comes back to the puzzle of highly and academically inflated youth unemployment by referring to a possible hidden cause in Korea: A strong insurance motive might explain the overall striving for an academic degree inducing not only wasteful congestion at labour market entries but also unfair job allocation through credentialism. JEL Classification: E24, I24, J64", "title": "" }, { "docid": "0382ad43b6d31a347d9826194a7261ce", "text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.", "title": "" }, { "docid": "7928ad4d18e3f3eaaf95fa0b49efafa0", "text": "Associative classifiers have been proposed to achieve an accurate model with each individual rule being interpretable. However, existing associative classifiers often consist of a large number of rules and, thus, can be difficult to interpret. We show that associative classifiers consisting of an ordered rule set can be represented as a tree model. From this view, it is clear that these classifiers are restricted in that at least one child node of a non-leaf node is never split. We propose a new tree model, i.e., condition-based tree (CBT), to relax the restriction. Furthermore, we also propose an algorithm to transform a CBT to an ordered rule set with concise rule conditions. This ordered rule set is referred to as a condition-based classifier (CBC). Thus, the interpretability of an associative classifier is maintained, but more expressive models are possible. The rule transformation algorithm can be also applied to regular binary decision trees to extract an ordered set of rules with simple rule conditions. Feature selection is applied to a binary representation of conditions to simplify/improve the models further. Experimental studies show that CBC has competitive accuracy performance, and has a significantly smaller number of rules (median of 10 rules per data set) than well-known associative classifiers such as CBA (median of 47) and GARC (median of 21). CBC with feature selection has even a smaller number of rules.", "title": "" }, { "docid": "7e85b8528370f2c0f1427b2d4ce30bf6", "text": "This paper deals with a new challenge for digital forensic experts - the forensic analysis of social networks. There is a lot of identity theft, theft of personal data, public defamation, cyber stalking and other criminal activities on social network sites. This paper will present a forensic analysis of social networks and cloud forensics in the internet environment. For the purpose of this research one case study is created like - a common practical scenario where the combination of identity theft and public defamation through Facebook activity is explored. 
Investigators must find the person who stole some others profile, who publish inappropriate and prohibited contents performing act of public defamation and humiliation of profile owner.", "title": "" }, { "docid": "e93eaa695003cb409957e5c7ed19bf2a", "text": "Prominent research argues that consumers often use personal budgets to manage self-control problems. This paper analyzes the link between budgeting and selfcontrol problems in consumption-saving decisions. It shows that the use of goodspecific budgets depends on the combination of a demand for commitment and the demand for flexibility resulting from uncertainty about intratemporal trade-offs between goods. It explains the subtle mechanism which renders budgets useful commitments, their interaction with minimum-savings rules (another widely-studied form of commitment), and how budgeting depends on the intensity of self-control problems. This theory matches several empirical findings on personal budgeting. JEL CLASSIFICATION: D23, D82, D86, D91, E62, G31", "title": "" }, { "docid": "ab0b8cea87678dd7b5ea5057fbdb0ac1", "text": "Data collection is a crucial operation in wireless sensor networks. The design of data collection schemes is challenging due to the limited energy supply and the hot spot problem. Leveraging empirical observations that sensory data possess strong spatiotemporal compressibility, this paper proposes a novel compressive data collection scheme for wireless sensor networks. We adopt a power-law decaying data model verified by real data sets and then propose a random projection-based estimation algorithm for this data model. Our scheme requires fewer compressed measurements, thus greatly reduces the energy consumption. It allows simple routing strategy without much computation and control overheads, which leads to strong robustness in practical applications. Analytically, we prove that it achieves the optimal estimation error bound. Evaluations on real data sets (from the GreenOrbs, IntelLab and NBDC-CTD projects) show that compared with existing approaches, this new scheme prolongs the network lifetime by 1.5X to 2X for estimation error 5-20 percent.", "title": "" }, { "docid": "add715f513c66cfa8799358c19390596", "text": "demonstrate the possible adaptability of this system to Arabic voice recognition.", "title": "" }, { "docid": "034cbb44f79573ef524d017401a788b7", "text": "Smoothly shaded color ramps, important for presentation graphics and computer imaging, are difficult for color managed systems utilizing device profiles. The causes for disruptive artifacts such as contours and banding are examined and a set of visual limits are established that, if met will avoid them. Based on these visual thresholds, an analysis of the numeric representation and processing of color data in an ICC profile-based environment yields some requirements for device profiles and their use. Specifically, we find that 8-bit precision and inadequate table indexing resolution cause contour artifacts. Banding is caused indirectly from the inversion of noisy and nonlinear printer color data when the profile was created. The noise is not instrument noise, but rather due to the inconsistency of printer output. Some proposals for improving this stage of profile making are suggested. Examples are provided to illustrate the sources of difficulties in rendering smooth and uniform color ramps.", "title": "" }, { "docid": "e700afa9064ef35f7d7de40779326cb0", "text": "Human activity recognition is important for many applications. 
This paper describes a human activity recognition framework based on feature selection techniques. The objective is to identify the most important features to recognize human activities. We first design a set of new features (called physical features) based on the physical parameters of human motion to augment the commonly used statistical features. To systematically analyze the impact of the physical features on the performance of the recognition system, a single-layer feature selection framework is developed. Experimental results indicate that physical features are always among the top features selected by different feature selection methods and the recognition accuracy is generally improved to 90%, or 8% better than when only statistical features are used. Moreover, we show that the performance is further improved by 3.8% by extending the single-layer framework to a multi-layer framework which takes advantage of the inherent structure of human activities and performs feature selection and classification in a hierarchical manner.", "title": "" }, { "docid": "0a7e755387f037cab0a51472763e620f", "text": "Introduction: Nowadays, one of the most important questions in teaching and learning involves increasing the degree of students’ engagement in learning. According to Astin’s Theory of Student engagement, the best learning environment is one in which it is possible to increase students’ engagement. The current study investigates the influences that using these networks for educational purposes may have on learners’ engagement, motivation, and learning.", "title": "" }, { "docid": "f06de504c2c2663b436a4696e010159e", "text": " Abstract—The theme of the paper is to design and implement the firing circuit for a converter. The necessity of getting synchronized firing pulses for the gate of the thyristor is discussed. Out of many variety of firing circuits available, the ideas behind are the two most popularly used control circuits that are namely using ramp signal and using cosine signal. It shows how a cosine controls scheme work. Detail description and functioning of each block is explained along with the waveforms at the output of the blocks. Experimental results obtained from oscillographic displays at important points of the circuits are included. In this paper, we fabricate a hardware circuit which implements the cosine control technique, test the circuit and also check that desired gate pulses for the thyristors.", "title": "" }, { "docid": "a9fba1188b97a2097702ff900f35d4d9", "text": "One of the beauties of use cases is their accessible, informal format. Use cases are easy to write, and the graphical notation is trivial. Because of their simplicity, use cases are not intimidating, even for teams that have little experience with formal requirements specification and management. However, the simplicity can be deceptive; writing good use cases takes some skill and practice. Many groups writing use cases for the first time run into similar kinds of problems. This paper presents the author's \"Top Ten\" list of use case pitfalls and problems, based on observations from a number of real projects. The paper outlines the symptoms of the problems, and recommends pragmatic cures for each. 
Examples are provided to illustrate the problems and their solutions.", "title": "" }, { "docid": "32b12ea15bea5eef932f2bfd97db7120", "text": "By studying capabilities inherent in the nerve proper and carefully considering patient complaints and limitations, the surgeon-therapist team may be able to guide patients through a restorative phase via nerve gliding techniques. Nerve symptoms must be heeded when employing rehabilitation techniques. Rather than encouraging the patient to push beyond nerve pain either proximally or distally, the patient is instructed to perform exercises in positions that enhance nerve gliding in a slow, controlled manner. \"Tincture of time\" is prescribed as the patient advances to a less symptomatic level of function.", "title": "" }, { "docid": "13cbca0e2780a95c1e9d4928dc9d236c", "text": "Matching user accounts can help us build better users’ profiles and benefit many applications. It has attracted much attention from both industry and academia. Most of existing works are mainly based on rich user profile attributes. However, in many cases, user profile attributes are unavailable, incomplete or unreliable, either due to the privacy settings or just because users decline to share their information. This makes the existing schemes quite fragile. Users often share their activities on different social networks. This provides an opportunity to overcome the above problem. We aim to address the problem of user identification based on User Generated Content (UGC). We first formulate the problem of user identification based on UGCs and then propose a UGC-based user identification model. A supervised machine learning based solution is presented. It has three steps: firstly, we propose several algorithms to measure the spatial similarity, temporal similarity and content similarity of two UGCs; secondly, we extract the spatial, temporal and content features to exploit these similarities; afterwards, we employ the machine learning method to match user accounts, and conduct the experiments on three ground truth datasets. The results show that the proposed method has given excellent performance with F1 values reaching 89.79%, 86.78% and 86.24% on three ground truth datasets, respectively. This work presents the possibility of matching user accounts with high accessible online data. © 2018 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
ea39ec535feb293bb26319c02eb5b7f0
25 Challenges of Semantic Process Modeling
[ { "docid": "79c2623b0e1b51a216fffbc6bbecd9ec", "text": "Visual notations form an integral part of the language of software engineering (SE). Yet historically, SE researchers and notation designers have ignored or undervalued issues of visual representation. In evaluating and comparing notations, details of visual syntax are rarely discussed. In designing notations, the majority of effort is spent on semantics, with graphical conventions largely an afterthought. Typically, no design rationale, scientific or otherwise, is provided for visual representation choices. While SE has developed mature methods for evaluating and designing semantics, it lacks equivalent methods for visual syntax. This paper defines a set of principles for designing cognitively effective visual notations: ones that are optimized for human communication and problem solving. Together these form a design theory, called the Physics of Notations as it focuses on the physical (perceptual) properties of notations rather than their logical (semantic) properties. The principles were synthesized from theory and empirical evidence from a wide range of fields and rest on an explicit theory of how visual notations communicate. They can be used to evaluate, compare, and improve existing visual notations as well as to construct new ones. The paper identifies serious design flaws in some of the leading SE notations, together with practical suggestions for improving them. It also showcases some examples of visual notation design excellence from SE and other fields.", "title": "" } ]
[ { "docid": "d11c2dd512f680e79706f73d4cd3d0aa", "text": "We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented in terms of a lowrank matrix, and the rank constraint can be relaxed so as to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layerwise manner. Empirically, we find that CCNNs achieve competitive or better performance than CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.", "title": "" }, { "docid": "c8e5257c2ed0023dc10786a3071c6e6a", "text": "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.", "title": "" }, { "docid": "8c4c15f8506adb0bac70b3d8ee012a7c", "text": "Low-rank matrix completion is the problem where one tries to recover a low-rank matrix from noisy observations of a subset of its entries. In this paper, we propose RMC, a new method to deal with the problem of robust low-rank matrix completion, i.e., matrix completion where a fraction of the observed entries are corrupted by non-Gaussian noise, typically outliers. The method relies on the idea of smoothing the `1 norm and using Riemannian optimization to deal with the low-rank constraint. We first state the algorithms as the successive minimization of smooth approximations of the `1 norm and we analyze its convergence by showing the strict decrease of the objective function. We then perform numerical experiments on synthetic data and demonstrate the effectiveness on the proposed method on the Netflix dataset.", "title": "" }, { "docid": "5c8ed4f3831ce864cbdaea07171b5a57", "text": "Hyper-beta-alaninemia is a rare metabolic condition that results in elevated plasma and urinary β-alanine levels and is characterized by neurotoxicity, hypotonia, and respiratory distress. 
It has been proposed that at least some of the symptoms are caused by oxidative stress; however, only limited information is available on the mechanism of reactive oxygen species generation. The present study examines the hypothesis that β-alanine reduces cellular levels of taurine, which are required for normal respiratory chain function; cellular taurine depletion is known to reduce respiratory function and elevate mitochondrial superoxide generation. To test the taurine hypothesis, isolated neonatal rat cardiomyocytes and mouse embryonic fibroblasts were incubated with medium lacking or containing β-alanine. β-alanine treatment led to mitochondrial superoxide accumulation in conjunction with a decrease in oxygen consumption. The defect in β-alanine-mediated respiratory function was detected in permeabilized cells exposed to glutamate/malate but not in cells utilizing succinate, suggesting that β-alanine leads to impaired complex I activity. Taurine treatment limited mitochondrial superoxide generation, supporting a role for taurine in maintaining complex I activity. Also affected by taurine is mitochondrial morphology, as β-alanine-treated fibroblasts undergo fragmentation, a sign of unhealthy mitochondria that is reversed by taurine treatment. If left unaltered, β-alanine-treated fibroblasts also undergo mitochondrial apoptosis, as evidenced by activation of caspases 3 and 9 and the initiation of the mitochondrial permeability transition. Together, these data show that β-alanine mediates changes that reduce ATP generation and enhance oxidative stress, factors that contribute to heart failure.", "title": "" }, { "docid": "06b6f659fe422410d65081735ad2d16a", "text": "BACKGROUND\nImproving survival and extending the longevity of life for all populations requires timely, robust evidence on local mortality levels and trends. The Global Burden of Disease 2015 Study (GBD 2015) provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015. These results informed an in-depth investigation of observed and expected mortality patterns based on sociodemographic measures.\n\n\nMETHODS\nWe estimated all-cause mortality by age, sex, geography, and year using an improved analytical approach originally developed for GBD 2013 and GBD 2010. Improvements included refinements to the estimation of child and adult mortality and corresponding uncertainty, parameter selection for under-5 mortality synthesis by spatiotemporal Gaussian process regression, and sibling history data processing. We also expanded the database of vital registration, survey, and census data to 14 294 geography-year datapoints. For GBD 2015, eight causes, including Ebola virus disease, were added to the previous GBD cause list for mortality. We used six modelling approaches to assess cause-specific mortality, with the Cause of Death Ensemble Model (CODEm) generating estimates for most causes. We used a series of novel analyses to systematically quantify the drivers of trends in mortality across geographies. First, we assessed observed and expected levels and trends of cause-specific mortality as they relate to the Socio-demographic Index (SDI), a summary indicator derived from measures of income per capita, educational attainment, and fertility. 
Second, we examined factors affecting total mortality patterns through a series of counterfactual scenarios, testing the magnitude by which population growth, population age structures, and epidemiological changes contributed to shifts in mortality. Finally, we attributed changes in life expectancy to changes in cause of death. We documented each step of the GBD 2015 estimation processes, as well as data sources, in accordance with Guidelines for Accurate and Transparent Health Estimates Reporting (GATHER).\n\n\nFINDINGS\nGlobally, life expectancy from birth increased from 61·7 years (95% uncertainty interval 61·4-61·9) in 1980 to 71·8 years (71·5-72·2) in 2015. Several countries in sub-Saharan Africa had very large gains in life expectancy from 2005 to 2015, rebounding from an era of exceedingly high loss of life due to HIV/AIDS. At the same time, many geographies saw life expectancy stagnate or decline, particularly for men and in countries with rising mortality from war or interpersonal violence. From 2005 to 2015, male life expectancy in Syria dropped by 11·3 years (3·7-17·4), to 62·6 years (56·5-70·2). Total deaths increased by 4·1% (2·6-5·6) from 2005 to 2015, rising to 55·8 million (54·9 million to 56·6 million) in 2015, but age-standardised death rates fell by 17·0% (15·8-18·1) during this time, underscoring changes in population growth and shifts in global age structures. The result was similar for non-communicable diseases (NCDs), with total deaths from these causes increasing by 14·1% (12·6-16·0) to 39·8 million (39·2 million to 40·5 million) in 2015, whereas age-standardised rates decreased by 13·1% (11·9-14·3). Globally, this mortality pattern emerged for several NCDs, including several types of cancer, ischaemic heart disease, cirrhosis, and Alzheimer's disease and other dementias. By contrast, both total deaths and age-standardised death rates due to communicable, maternal, neonatal, and nutritional conditions significantly declined from 2005 to 2015, gains largely attributable to decreases in mortality rates due to HIV/AIDS (42·1%, 39·1-44·6), malaria (43·1%, 34·7-51·8), neonatal preterm birth complications (29·8%, 24·8-34·9), and maternal disorders (29·1%, 19·3-37·1). Progress was slower for several causes, such as lower respiratory infections and nutritional deficiencies, whereas deaths increased for others, including dengue and drug use disorders. Age-standardised death rates due to injuries significantly declined from 2005 to 2015, yet interpersonal violence and war claimed increasingly more lives in some regions, particularly in the Middle East. In 2015, rotaviral enteritis (rotavirus) was the leading cause of under-5 deaths due to diarrhoea (146 000 deaths, 118 000-183 000) and pneumococcal pneumonia was the leading cause of under-5 deaths due to lower respiratory infections (393 000 deaths, 228 000-532 000), although pathogen-specific mortality varied by region. Globally, the effects of population growth, ageing, and changes in age-standardised death rates substantially differed by cause. Our analyses on the expected associations between cause-specific mortality and SDI show the regular shifts in cause of death composition and population age structure with rising SDI. Country patterns of premature mortality (measured as years of life lost [YLLs]) and how they differ from the level expected on the basis of SDI alone revealed distinct but highly heterogeneous patterns by region and country or territory. 
Ischaemic heart disease, stroke, and diabetes were among the leading causes of YLLs in most regions, but in many cases, intraregional results sharply diverged for ratios of observed and expected YLLs based on SDI. Communicable, maternal, neonatal, and nutritional diseases caused the most YLLs throughout sub-Saharan Africa, with observed YLLs far exceeding expected YLLs for countries in which malaria or HIV/AIDS remained the leading causes of early death.\n\n\nINTERPRETATION\nAt the global scale, age-specific mortality has steadily improved over the past 35 years; this pattern of general progress continued in the past decade. Progress has been faster in most countries than expected on the basis of development measured by the SDI. Against this background of progress, some countries have seen falls in life expectancy, and age-standardised death rates for some causes are increasing. Despite progress in reducing age-standardised death rates, population growth and ageing mean that the number of deaths from most non-communicable causes are increasing in most countries, putting increased demands on health systems.\n\n\nFUNDING\nBill & Melinda Gates Foundation.", "title": "" }, { "docid": "269387b9115c35ea339184bd175224d2", "text": "Whereas outdoor navigation systems typically rely upon GPS, indoor systems have to rely upon different techniques for localizing the user, as GPS signals cannot be received indoors. Over the past decade various indoor navigation systems have been developed. This paper provides a comprehensive overview of existing indoor navigation systems and analyzes the different techniques used for: (1) locating the user; (2) planning a path; (3) representing the environment; and (4) interacting with the user. Our survey identifies a number of research issues that could facilitate large scale deployment of indoor navigation systems.", "title": "" }, { "docid": "de50bb6d1f1d09ddc6a3da3de79d12d2", "text": "This paper is to describe an intelligent motorized wheel chair for handicapped person using voice and touch screen technology. It enables a disabled person to move around independently using a touch screen and a voice recognition application which is interfaced with motors through microcontroller. When we want to change the direction, the touch screen sensor is modeled to direct the user to required destination using direction keys on the screen and that values are given to microcontroller. Depending on the direction selected on the touch screen, microcontroller controls the wheel chair directions. This can also be controlled through simple voice commands using voice controller. The speech recognition system is easy to use programmable speech recognition circuit that is the system to be trained the words (or vocal utterances) the user wants the circuit to recognize. The speed controller works by varying the average voltage sent to the motor. This is done by switching the motors supply on and off very quickly using PWM technique. The methodology adopted is based on grouping a microcontroller with a speech recognition system and touch screen. Keywords— Speech recognition system, Touch Screen sensor,", "title": "" }, { "docid": "40ba65504518383b4ca2a6fabff261fe", "text": "Fig. 1. Noirot and Quennedey's original classification of insect exocrine glands, based on a rhinotermitid sternal gland. The asterisk indicates a subcuticular space. 
Abbreviations: C, cuticle; D, duct cells; G1, secretory cells class 1; G2, secretory cells class 2; G3, secretory cells class 3; S, campaniform sensilla (modified after Noirot and Quennedey, 1974). ‘Describe the differences between endocrine and exocrine glands’, it sounds a typical exam question from a general biology course during our time at high school. Because of their secretory products being released to the outside world, exocrine glands definitely add flavour to our lives. Everybody is familiar with their secretions, from the salty and perhaps unpleasantly smelling secretions from mammalian sweat glands to the sweet exudates of the honey glands used by some caterpillars to attract ants, from the most painful venoms of bullet ants and scorpions to the precious wax that honeybees use to make their nest combs. Besides these functions, exocrine glands are especially known for the elaboration of a broad spectrum of pheromonal substances, and can also be involved in the production of antibiotics, lubricants, and digestive enzymes. Modern research in insect exocrinology started with the classical works of Charles Janet, who introduced a histological approach to the insect world (Billen and Wilson, 2007). The French school of insect anatomy remained strong since then, and the commonly used classification of insect exocrine glands generally follows the pioneer paper of Charles Noirot and Andr e Quennedey (1974). These authors were leading termite researchers using their extraordinary knowledge on termite glands to understand related phenomena, such as foraging and reproductive behaviour. They distinguish between class 1 with secretory cells adjoining directly to the cuticle, and class 3 with bicellular units made up of a large secretory cell and its accompanying duct cell that carries the secretion to the exterior (Fig. 1). The original classification included also class 2 secretory cells, but these are very rare and are only found in sternal and tergal glands of a cockroach and many termites (and also in the novel nasus gland described in this issue!). This classification became universally used, with the rather strange consequence that the vast majority of insect glands is illogically made up of class 1 and class 3 cells. In a follow-up paper, the uncommon class 2 cells were re-considered as oenocyte homologues (Noirot and Quennedey, 1991). Irrespectively of these objections, their 1974 pioneer paper is a cornerstone of modern works dealing with insect exocrine glands, as is also obvious in the majority of the papers in this special issue. This paper already received 545 citations at Web of Science and 588 at Google Scholar (both on 24 Aug 2015), so one can easily say that all researchers working on insect glands consider this work truly fundamental. Exocrine glands are organs of cardinal importance in all insects. The more common ones include mandibular and labial", "title": "" }, { "docid": "2c33713709afcb3d903945aff096a7f2", "text": "This study investigates the relationship of strategic leadership behaviors with executive innovation influence and the moderating effects of top management team (TMT)’s tenure heterogeneity and social culture on that relationship. Using survey data from six countries comprising three social cultures, strategic leadership behaviors were found to have a strong positive relationship with executive influence on both product–market and administrative innovations. 
In addition, TMT tenure heterogeneity moderated the relationship of strategic leadership behaviors with executive innovation influence for both types of innovation, while social culture moderated that relationship only in the case of administrative innovation. Copyright  2005 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "6e63abd83cc2822f011c831234c6d2e7", "text": "The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-theart in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.", "title": "" }, { "docid": "7fab7940321a606b10225d14df46ce65", "text": "Domain adaptation aims to learn models on a supervised source domain that perform well on an unsupervised target. Prior work has examined domain adaptation in the context of stationary domain shifts, i.e. static data sets. However, with large-scale or dynamic data sources, data from a defined domain is not usually available all at once. For instance, in a streaming data scenario, dataset statistics effectively become a function of time. We introduce a framework for adaptation over non-stationary distribution shifts applicable to large-scale and streaming data scenarios. The model is adapted sequentially over incoming unsupervised streaming data batches. This enables improvements over several batches without the need for any additionally annotated data. To demonstrate the effectiveness of our proposed framework, we modify associative domain adaptation to work well on source and target data batches with unequal class distributions. We apply our method to several adaptation benchmark datasets for classification and show improved classifier accuracy not only for the currently adapted batch, but also when applied on future stream batches. 
Furthermore, we show the applicability of our associative learning modifications to semantic segmentation, where we achieve competitive results.", "title": "" }, { "docid": "106af615d24a2867fbfa78d963f64cab", "text": "The recent development of calibration algorithms has been driven into two major directions: (1) an increasing accuracy of mathematical approaches and (2) an increasing flexibility in usage by reducing the dependency on calibration objects. These two trends, however, seem to be contradictory since the overall accuracy is directly related to the accuracy of the pose estimation of the calibration object and therefore demanding large objects, while an increased flexibility leads to smaller objects or noisier estimation methods. The method presented in this paper aims to resolves this problem in two steps: First, we derive a simple closed-form solution with a shifted focus towards the equation of translation that only solves for the necessary hand-eye transformation. We show that it is superior in accuracy and robustness compared to traditional approaches. Second, we decrease the dependency on the calibration object to a single 3D-point by using a similar formulation based on the equation of translation which is much less affected by the estimation error of the calibration object's orientation. Moreover, it makes the estimation of the orientation obsolete while taking advantage of the higher accuracy and robustness from the first solution, resulting in a versatile method for continuous hand-eye calibration.", "title": "" }, { "docid": "cd8c01d37382bf20a20fe82a55615b99", "text": "Consumer trust is a critical enabler to the success of online retailing and knowledge is one important factor influencing the level of trust. However, there is no consensus on the relationship between knowledge and trust. Some studies argued a negative relationship between knowledge and trust while the others argued positive. This study discussed the relationship between knowledge, trust in online shopping, and the intention to go shopping online. The results revealed that knowledge is positively associated with trust and online shopping activities. In other words, people who know more about online shopping will trust and go shopping more online. Online retailing practice should make the public knowledgeable about online transaction security mechanisms to build userspsila trust in online shopping.", "title": "" }, { "docid": "748c2047817ad53abf60a26624612a9e", "text": "In this paper, we propose a new method to efficiently synthesi ze character motions that involve close contacts such as wearing a T-shirt, passing the arms through the strin gs of a knapsack, or piggy-back carrying an injured person. We introduce the concept of topology coordinates, i n which the topological relationships of the segments are embedded into the attributes. As a result, the computati on for collision avoidance can be greatly reduced for complex motions that require tangling the segments of the bo dy. Our method can be combinedly used with other prevalent frame-based optimization techniques such as inv erse kinematics.", "title": "" }, { "docid": "c438965615449efd728acec42be0b6d1", "text": "Human adults generally find fast tempos more arousing than slow tempos, with tempo frequently manipulated in music to alter tension and emotion. We used a previously published method [McDermott, J., & Hauser, M. (2004). Are consonant intervals music to their ears? Spontaneous acoustic preferences in a nonhuman primate. 
Cognition, 94(2), B11-B21] to test cotton-top tamarins and common marmosets, two new-World primates, for their spontaneous responses to stimuli that varied systematically with respect to tempo. Across several experiments, we found that both tamarins and marmosets preferred slow tempos to fast. It is possible that the observed preferences were due to arousal, and that this effect is homologous to the human response to tempo. In other respects, however, these two monkey species showed striking differences compared to humans. Specifically, when presented with a choice between slow tempo musical stimuli, including lullabies, and silence, tamarins and marmosets preferred silence whereas humans, when similarly tested, preferred music. Thus despite the possibility of homologous mechanisms for tempo perception in human and nonhuman primates, there appear to be motivational ties to music that are uniquely human.", "title": "" }, { "docid": "6082c0252dffe7903512e36f13da94eb", "text": "Thousands of storage tanks in oil refineries have to be inspected manually to prevent leakage and/or any other potential catastrophe. A wall climbing robot with permanent magnet adhesion mechanism equipped with nondestructive sensor has been designed. The robot can be operated autonomously or manually. In autonomous mode the robot uses an ingenious coverage algorithm based on distance transform function to navigate itself over the tank surface in a back and forth motion to scan the external wall for the possible faults using sensors without any human intervention. In manual mode the robot can be navigated wirelessly from the ground station to any location of interest. Preliminary experiment has been carried out to test the prototype.", "title": "" }, { "docid": "2bea747262e8801500d55d55e47f21d0", "text": "Multivariate time series (MTS) arise when multiple interconnected sensors record data over time. Dealing with this high-dimensional data is challenging for every classifier for at least two reasons: First, an MTS is not only characterized by individual feature values, but also by the interplay of features in different dimensions. Second, the high dimensionality typically adds large amounts of irrelevant data and noise. We present our novel MTS classifier WEASEL+MUSE which addresses both challenges. WEASEL+MUSE builds a multivariate feature vector, first using a sliding-window approach applied to each dimension of the MTS, then extracting discrete features per window and dimension. The feature vector is subsequently fed through feature selection, removing non-discriminative features, and analysed by a machine learning classifier. The novelty of WEASEL+MUSE lies in its specific way of extracting and filtering multivariate features from MTS by encoding context information into each feature. Still, the resulting feature set is small, yet very discriminative and useful for MTS classification. Based on a benchmark of 20 MTS datasets, we found that WEASEL+MUSE is among the most accurate state-of-the-art classifiers.", "title": "" }, { "docid": "8b519431416a4bac96a8a975d8973ef9", "text": "A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for the graph coloring problem. These algorithms combine a new class of highly specialized crossover operators and a well-known tabu search algorithm. Experiments of such a hybrid algorithm are carried out on large DIMACS Challenge benchmark graphs. 
Results prove very competitive with and even better than those of state-of-the-art algorithms. Analysis of the behavior of the algorithm sheds light on ways to further improvement.", "title": "" }, { "docid": "a26dae152f7017aff5ecd1265914c48e", "text": "Algorithms that use point-cloud models make heavy use of the neighborhoods of the points. These neighborhoods are used to compute the surface normals for each point, mollification, and noise removal. All of these primitive operations require the seemingly repetitive process of finding the k nearest neighbors (kNNs) of each point. These algorithms are primarily designed to run in main memory. However, rapid advances in scanning technologies have made available point-cloud models that are too large to fit in the main memory of a computer. This calls for more efficient methods of computing the kNNs of a large collection of points many of which are already in close proximity. A fast kNN algorithm is presented that makes use of the locality of successive points whose k nearest neighbors are sought to reduce significantly the time needed to compute the neighborhood needed for the primitive operation as well as enable it to operate in an environment where the data is on disk. Results of experiments demonstrate an order of magnitude improvement in the time to perform the algorithm and several orders of magnitude improvement in work efficiency when compared with several prominent existing methods. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ce82b53bc47ea8ca9c6bdfb5421a5210", "text": "Max Planck Institute for Biogeochemistry, Hans-Knöll-Strasse 10, 07745 Jena, Germany, German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Deutscher Platz 5, 04103 Leipzig, Germany, Department of Forest Resources, University of Minnesota, St Paul, MN 55108, USA, Department of Computer Science and Engineering, University of Minnesota, Twin Cities, USA, Microsoft Corporation, One Microsoft Way, Redmond, WA 98052, USA, Instituto Multidisciplinario de Biología Vegetal (IMBIV – CONICET) and Departamento de Diversidad Biológica y Ecología, FCEFyN, Universidad Nacional de Córdoba, CC 495, 5000, Córdoba, Argentina, Royal Botanic Gardens Kew, Wakehurst Place, RH17 6TN, UK, Center for Biodiversity Management, Yungaburra 4884, Queensland, Australia, Centre National de la Recherche Scientifique, Grenoble, France, Laboratoire ESE, Université Paris-Sud, UMR 8079 CNRS, UOS, AgroParisTech, 91405 Orsay, France, University of Leipzig, Leipzig, Germany, Department of Biological Sciences, Macquarie University, NSW 2109, Australia, Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Republic of Panama, Hawkesbury Institute for the Environment, University of Western Sydney, Locked Bag 1797, Penrith, NSW 2751 Australia ABSTRACT", "title": "" } ]
scidocsrr
4f2b4e3c88bc2e9af9991a0f0a7c8e36
Learning a Driving Simulator
[ { "docid": "acc526dd0d86c5bf83034b3cd4c1ea38", "text": "We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.", "title": "" } ]
[ { "docid": "3a7d3f98e4501e04e68334d492ad2df8", "text": "Several studies focused on single human activity recognition, while the classification of group activities is still under-investigated. In this paper, we present an approach for classifying the activity performed by a group of people during daily life tasks at work. We address the problem in a hierarchical way by first examining individual person actions, reconstructed from data coming from wearable and ambient sensors. We then observe if common temporal/spatial dynamics exist at the level of group activity. We deployed a Multimodal Deep Learning Network, where the term multimodal is not intended to separately elaborate the considered different input modalities, but refers to the possibility of extracting activity-related features for each group member, and then merge them through shared levels. We evaluated the proposed approach in a laboratory environment, where the employees are monitored during their normal activities. The experimental results demonstrate the effectiveness of the proposed model with respect to an SVM benchmark.", "title": "" }, { "docid": "6c0a3c6c6f7faf928ad8e40f9f9b341c", "text": "This is the fourth part of a series of papers that provide a comprehensive survey of techniques for tracking maneuvering targets without addressing the so-called measurement-origin uncertainty. Part I [1] and Part II [2] deal with target motion models. Part III [3] covers the measurement models and the associated techniques. This part surveys tracking techniques that are based on decisions regarding target maneuver. Three classes of techniques are identified and described: equivalent noise, input detection and estimation, and switching model. Maneuver detection methods are also included.", "title": "" }, { "docid": "b2bf48c6c443f8fb39f79d2c9c0714f3", "text": "We review drug addiction from the perspective of the hypothesis that drugs of abuse interact with distinct brain memory systems. We focus on emotional and procedural forms of memory, encompassing Pavlovian and instrumental conditioning, both for action-outcome and for stimulus-response associations. Neural structures encompassed by these systems include the amygdala, hippocampus, nucleus accumbens, and dorsal striatum. Additional influences emanate from the anterior cingulate and prefrontal cortex, which are implicated in the encoding and retrieval of drug-related memories that lead to drug craving and drug use. Finally, we consider the ancillary point that chronic abuse of many drugs may impact directly on neural memory systems via neuroadaptive and neurotoxic effects that lead to cognitive impairments in which memory dysfunction is prominent.", "title": "" }, { "docid": "8f3b28c1b271652136ac43f420e92dc3", "text": "In this paper, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although convolutional neural networks (CNNs) have made substantial improvement on human attention prediction, it is still needed to improve the CNN-based attention models by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various reception fields. 
Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of previous approaches of providing supervision only at the output layer and propagating this supervision back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches of learning multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark data sets demonstrate our method yields the state-of-the-art performance with competitive inference time.11Our source code is available at https://github.com/wenguanwang/deepattention.", "title": "" }, { "docid": "c88370dfcf79534c019fd797f055f393", "text": "Mobile Online Social Networks (mOSNs) have recently grown in popularity. With the ubiquitous use of mobile devices and a rapid shift of technology and access to OSNs, it is important to examine the impact of mobile OSNs from a privacy standpoint. We present a taxonomy of ways to study privacy leakage and report on the current status of known leakages. We find that all mOSNs in our study exhibit some leakage of private information to third parties. Novel concerns include combination of new features unique to mobile access with the leakage in OSNs that we had examined earlier.", "title": "" }, { "docid": "34c343413fc748c1fc5e07fb40e3e97d", "text": "We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.", "title": "" }, { "docid": "b80bb16e8f5bff921304908c5731c158", "text": "Internet and networks applications are growing very fast, so the needs to protect such applications are increased. Encryption algorithms play a main role in information security systems. . In this paper, we compare the various cryptographic algorithms. On the basis of parameter taken as time various cryptographic algorithms are evaluated on different video files. Different video files are having different processing speed on which various size of file are processed. Calculation of time for encryption and decryption in different video file format such as .vob, and .DAT, having file size for audio and for video 1MB to 1100MB respectively. Encryption processing time and decryption processing time are compared between various cryptographic algorithms which come out to be not too much. Overall time depend on the corresponding file size. 
Throughput analysis also done.", "title": "" }, { "docid": "a602a532a7b95eae050d084e10606951", "text": "Municipal solid waste management has emerged as one of the greatest challenges facing environmental protection agencies in developing countries. This study presents the current solid waste management practices and problems in Nigeria. Solid waste management is characterized by inefficient collection methods, insufficient coverage of the collection system and improper disposal. The waste density ranged from 280 to 370 kg/m3 and the waste generation rates ranged from 0.44 to 0.66 kg/capita/day. The common constraints faced environmental agencies include lack of institutional arrangement, insufficient financial resources, absence of bylaws and standards, inflexible work schedules, insufficient information on quantity and composition of waste, and inappropriate technology. The study suggested study of institutional, political, social, financial, economic and technical aspects of municipal solid waste management in order to achieve sustainable and effective solid waste management in Nigeria.", "title": "" }, { "docid": "2900e8ffec7d67f453204cbe38f09471", "text": "This paper considers how we feel about the content we see or hear. As opposed to the cognitive content information composed of the facts about the genre, temporal content structures and spatiotemporal content elements, we are interested in obtaining the information about the feelings, emotions, and moods evoked by a speech, audio, or video clip. We refer to the latter as the affective content, and to the terms such as happy or exciting as the affective labels of an audiovisual signal. In the first part of the paper, we explore the possibilities for representing and modeling the affective content of an audiovisual signal to effectively bridge the affective gap. Without loosing generality, we refer to this signal simply as video, which we see as an image sequence with an accompanying soundtrack. Then, we show the high potential of the affective video content analysis for enhancing the content recommendation functionalities of the future PVRs and VOD systems. We conclude this paper by outlining some interesting research challenges in the field", "title": "" }, { "docid": "a0ebe19188abab323122a5effc3c4173", "text": "In this paper, we present LOADED, an algorithm for outlier detection in evolving data sets containing both continuous and categorical attributes. LOADED is a tunable algorithm, wherein one can trade off computation for accuracy so that domain-specific response times are achieved. Experimental results show that LOADED provides very good detection and false positive rates, which are several times better than those of existing distance-based schemes.", "title": "" }, { "docid": "fbcdb3d565519b47922394dc9d84985f", "text": "We present a novel end-to-end trainable neural network model for task-oriented dialog systems. The model is able to track dialog state, issue API calls to knowledge base (KB), and incorporate structured KB query results into system responses to successfully complete task-oriented dialogs. The proposed model produces well-structured system responses by jointly learning belief tracking and KB result processing conditioning on the dialog history. We evaluate the model in a restaurant search domain using a dataset that is converted from the second Dialog State Tracking Challenge (DSTC2) corpus. Experiment results show that the proposed model can robustly track dialog state given the dialog history. 
Moreover, our model demonstrates promising results in producing appropriate system responses, outperforming prior end-to-end trainable neural network models using per-response accuracy evaluation metrics.", "title": "" }, { "docid": "0f1f6570abf200de786221f28210ed78", "text": "This paper presents a novel idea for reducing the data storage problems in the self-driving cars. Self-driving cars is a technology that is observed by the modern word with most curiosity. However the vulnerability with the car is the growing data and the approach for handling such huge amount of data growth. This paper proposes a cloud based self-driving car which can optimize the data storage problems in such cars. The idea is to not store any data in the car, rather download everything from the cloud as per the need of the travel. This allows the car to not keep a huge amount of data and rely on a cloud infrastructure for the drive.", "title": "" }, { "docid": "a00fe5032a5e1835120135e6e504d04b", "text": "Perfect information Monte Carlo (PIMC) search is the method of choice for constructing strong Al systems for trick-taking card games. PIMC search evaluates moves in imperfect information games by repeatedly sampling worlds based on state inference and estimating move values by solving the corresponding perfect information scenarios. PIMC search performs well in trick-taking card games despite the fact that it suffers from the strategy fusion problem, whereby the game's information set structure is ignored because moves are evaluated opportunistically in each world. In this paper we describe imperfect information Monte Carlo (IIMC) search, which aims at mitigating this problem by basing move evaluation on more realistic playout sequences rather than perfect information move values. We show that RecPIMC - a recursive IIMC search variant based on perfect information evaluation - performs considerably better than PIMC search in a large class of synthetic imperfect information games and the popular card game of Skat, for which PIMC search is the state-of-the-art cardplay algorithm.", "title": "" }, { "docid": "bdc9bc09af90bd85f64c79cbca766b61", "text": "The inhalation route is frequently used to administer drugs for the management of respiratory diseases such as asthma or chronic obstructive pulmonary disease. Compared with other routes of administration, inhalation offers a number of advantages in the treatment of these diseases. For example, via inhalation, a drug is directly delivered to the target organ, conferring high pulmonary drug concentrations and low systemic drug concentrations. Therefore, drug inhalation is typically associated with high pulmonary efficacy and minimal systemic side effects. The lung, as a target, represents an organ with a complex structure and multiple pulmonary-specific pharmacokinetic processes, including (1) drug particle/droplet deposition; (2) pulmonary drug dissolution; (3) mucociliary and macrophage clearance; (4) absorption to lung tissue; (5) pulmonary tissue retention and tissue metabolism; and (6) absorptive drug clearance to the systemic perfusion. In this review, we describe these pharmacokinetic processes and explain how they may be influenced by drug-, formulation- and device-, and patient-related factors. 
Furthermore, we highlight the complex interplay between these processes and describe, using the examples of inhaled albuterol, fluticasone propionate, budesonide, and olodaterol, how various sequential or parallel pulmonary processes should be considered in order to comprehend the pulmonary fate of inhaled drugs.", "title": "" }, { "docid": "a3bce6c544a08e48a566a189f66d0131", "text": "Model-free episodic reinforcement learning problems define the environment reward with functions that often provide only sparse information throughout the task. Consequently, agents are not given enough feedback about the fitness of their actions until the task ends with success or failure. Previous work addresses this problem with reward shaping. In this paper we introduce a novel approach to improve modelfree reinforcement learning agents’ performance with a three step approach. Specifically, we collect demonstration data, use the data to recover a linear function using inverse reinforcement learning and we use the recovered function for potential-based reward shaping. Our approach is model-free and scalable to high dimensional domains. To show the scalability of our approach we present two sets of experiments in a two dimensional Maze domain, and the 27 dimensional Mario AI domain. We compare the performance of our algorithm to previously introduced reinforcement learning from demonstration algorithms. Our experiments show that our approach outperforms the state-of-the-art in cumulative reward, learning rate and asymptotic performance.", "title": "" }, { "docid": "1fde86a3105684900bc51e29c84661ca", "text": "During the last few years, Wireless Body Area Networks (WBANs) have emerged into many application domains, such as medicine, sport, entertainments, military, and monitoring. This emerging networking technology can be used for e-health monitoring. In this paper, we review the literature and investigate the challenges in the development architecture of WBANs. Then, we classified the challenges of WBANs that need to be addressed for their development. Moreover, we investigate the various diseases and healthcare systems and current state-ofthe-art of applications and mainly focus on the remote monitoring for elderly and chronically diseases patients. Finally, relevant research issues and future development are discussed. Keywords—Wireless body area networks; review; challenges; applications; architecture; radio technologies; telemedicine", "title": "" }, { "docid": "6b3da7a62570e083c2ca27a4287d6d8d", "text": "In the area of biped robot research, much progress has been made in the past few years. However, some difficulties remain to be dealt with, particularly about the implementation of fast and dynamic walking gaits, in other words anthropomorphic gaits, especially on uneven terrain. In this perspective, both concepts of center of pressure (CoP) and zero moment point (ZMP) are obviously useful. In this paper, the two concepts are strictly defined, the CoP with respect to ground-feet contact forces, the ZMP with respect to gravity plus inertia forces. Then, the coincidence of CoP and ZMP is proven, and related control aspects are examined. Finally, a virtual CoP-ZMP is defined, allowing us to extend the concept when walking on uneven terrain. This paper is a theoretical study. 
Experimental results are presented in a companion paper, analyzing the evolution of the ground contact forces obtained from a human walker wearing robot feet as shoes.", "title": "" }, { "docid": "a719c5020e5398a1f49f5fdfa2dc065e", "text": "Deep learning has yielded state-of-the-art performance on many natural language processing tasks including named entity recognition (NER). However, this typically requires large amounts of labeled data. In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning. While active learning is sample-efficient, it can be computationally expensive since it requires iterative retraining. To speed this up, we introduce a lightweight architecture for NER, viz., the CNN-CNN-LSTM model consisting of convolutional character and word encoders and a long short term memory (LSTM) tag decoder. The model achieves nearly state-of-the-art performance on standard datasets for the task while being computationally much more efficient than best performing models. We carry out incremental active learning, during the training process, and are able to nearly match state-of-the-art performance with just 25% of the original training data.", "title": "" }, { "docid": "bbfc24f527f03ca803953d19ccb2650b", "text": "Customer churn has emerged as a critical issue for Customer Relationship Management and customer retention in the telecommunications industry, thus churn prediction is necessary and valuable to retain the customers and reduce the losses. Moreover, high predictive accuracy and good interpretability of the results are two key measures of a classification model. More studies have shown that single model-based classification methods may not be good enough to achieve a satisfactory result. To obtain more accurate predictive results, we present a novel hybrid model-based learning system, which integrates the supervised and unsupervised techniques for predicting customer behaviour. The system combines a modified k-means clustering algorithm and a classic rule inductive technique (FOIL). Three sets of experiments were carried out on telecom datasets. One set of the experiments is for verifying that the weighted k-means clustering can lead to a better data partitioning results; the second set of experiments is for evaluating the classification results, and comparing it to other well-known modelling techniques; the last set of experiment compares the proposed hybrid-model system with several other recently proposed hybrid classification approaches. We also performed a comparative study on a set of benchmarks obtained from the UCI repository. All the results show that the hybrid model-based learning system is very promising and outperform the existing models. With recent evolution in the Information and Communication Technology (ICT) sector, numerous new and attractive services have been introduced, and they put huge pressure on traditional services. Customer churn has emerged as one of the major issues in Customer Relationship Management (CRM) in telecommunica-tion services around the world, for both wireless providers and long-distance carriers. For instance, in the U.S., telecom providers of long-distance and international services have been bearing the churn rates from 45% to 70% percent for some years (Mattison, 2001). 
Under the fierce competitive environment, it becomes very important for the telecom operators to retain their existing customers as acquiring new customers is much more expensive. Consequently , predicting which customers are likely to stop their subscription and switch to competitors (churn) is critical. Predicting the potential churners and successfully retain them, especially the valuable ones, can substantially increase the profitability of a company. In the telecommunications industry, operators usually capture the transactional data, which reflects the service usage, and some static data such as subscriber's personal information and contract details. Data mining (DM) methods have emerged as a good alternative to study the …", "title": "" }, { "docid": "db35a26248d43d5fbf5a0bad0fdd1463", "text": "Place is an essential concept in human discourse. It is people's interaction and experience with their surroundings that identify place from non-place in space. This paper explores the use of spatial footprints as a record of human interaction with the environment. Specifically, we use geotagged photos collected in Flickr to provide a collective view of sense of place, in terms of significance and location. Spatial footprints associated with photographs can not only describe individual place locations and spatial extents but also the relationship between places, such as hierarchy. This type of information about place may be utilized to study the way people understand their landscape, or can be incorporated into existing gazetteers for geographic information retrieval and location-based services. Other sources of user-generated geographic information, such as Foursquare and Twitter, may also be harvested and aggregated to study place in a similar way.", "title": "" } ]
scidocsrr
cd0d3ccc4fe4a2234c3b2f0d7641b99a
Constrained iterative LQR for on-road autonomous driving motion planning
[ { "docid": "44faf0dd15da256cdbf5bf58e1b5a775", "text": "We describe a practical path-planning algorithm that generates smooth paths for an autonomous vehicle operating in an unknown environment, where obstacles are detected online by the robot’s sensors. This work was motivated by and experimentally validated in the 2007 DARPA Urban Challenge, where robotic vehicles had to autonomously navigate parking lots. Our approach has two main steps. The first step uses a variant of the well-known A* search algorithm, applied to the 3D kinematic state space of the vehicle, but with a modified state-update rule that captures the continuous state of the vehicle in the discrete nodes of A* (thus guaranteeing kinematic feasibility of the path). The second step then improves the quality of the solution via numeric non-linear optimization, leading to a local (and frequently global) optimum. The path-planning algorithm described in this paper was used by the Stanford Racing Teams robot, Junior, in the Urban Challenge. Junior demonstrated flawless performance in complex general path-planning tasks such as navigating parking lots and executing U-turns on blocked roads, with typical fullcycle replaning times of 50–300ms. Introduction and Related Work We address the problem of path planning for an autonomous vehicle operating in an unknown environment. We assume the robot has adequate sensing and localization capability and must replan online while incrementally building an obstacle map. This scenario was motivated, in part, by the DARPA Urban Challenge, in which vehicles had to freely navigate parking lots. The path-planning algorithm described below was used by the Stanford Racing Team’s robot, Junior in the Urban Challenge (DARPA 2007). Junior (Figure 1) demonstrated flawless performance in complex general path-planning tasks—many involving driving in reverse—such as navigating parking lots, executing Uturns, and dealing with blocked roads and intersections with typical full-cycle replanning times of 50–300ms on a modern PC. One of the main challenges in developing a practical path planner for free navigation zones arises from the fact that the space of all robot controls—and hence trajectories—is continuous, leading to a complex continuous-variable optimization landscape. Much of prior work on search algorithms for Copyright c © 2008, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Junior, our entry in the DARPA Urban Challenge, was used in all experiments. Junior is equipped with several LIDAR and RADAR units, and a high-accuracy inertial measurement system. path planning (Ersson and Hu 2001; Koenig and Likhachev 2002; Ferguson and Stentz 2005; Nash et al. 2007) yields fast algorithms for discrete state spaces, but those algorithms tend to produce paths that are non-smooth and do not generally satisfy the non-holonomic constraints of the vehicle. An alternative approach that guarantees kinematic feasibility is forward search in continuous coordinates, e.g., using rapidly exploring random trees (RRTs) (Kavraki et al. 1996; LaValle 1998; Plaku, Kavraki, and Vardi 2007). The key to making such continuous search algorithms practical for online implementations lies in an efficient guiding heuristic. Another approach is to directly formulate the path-planning problem as a non-linear optimization problem in the space of controls or parametrized curves (Cremean et al. 
2006), but in practice guaranteeing fast convergence of such programs is difficult due to local minima. Our algorithm builds on the existing work discussed above, and consists of two main phases. The first step uses a heuristic search in continuous coordinates that guarantees kinematic feasibility of computed trajectories. While lacking theoretical optimality guarantees, in practice this first", "title": "" }, { "docid": "768336582eb1aece4454ec461f3840d2", "text": "This paper presents an Iterative Linear Quadratic Regulator (ILQR) method for locally-optimal feedback control of nonlinear dynamical systems. The method is applied to a musculo-skeletal arm model with 10 state dimensions and 6 controls, and is used to compute energy-optimal reaching movements. Numerical comparisons with three existing methods demonstrate that the new method converges substantially faster and finds slightly better solutions.", "title": "" }, { "docid": "c708834dc328b9ab60471535bdd37cf0", "text": "Trajectory optimizers are a powerful class of methods for generating goal-directed robot motion. Differential Dynamic Programming (DDP) is an indirect method which optimizes only over the unconstrained control-space and is therefore fast enough to allow real-time control of a full humanoid robot on modern computers. Although indirect methods automatically take into account state constraints, control limits pose a difficulty. This is particularly problematic when an expensive robot is strong enough to break itself. In this paper, we demonstrate that simple heuristics used to enforce limits (clamping and penalizing) are not efficient in general. We then propose a generalization of DDP which accommodates box inequality constraints on the controls, without significantly sacrificing convergence quality or computational effort. We apply our algorithm to three simulated problems, including the 36-DoF HRP-2 robot. A movie of our results can be found here goo.gl/eeiMnn.", "title": "" } ]
[ { "docid": "fbdbc870a78d9ee19446f3bb57731688", "text": "Because of the intangible and highly uncertain nature of innovation, investors may have difficulty processing information associated with a firm’s innovation and innovation search strategy. Due to cognitive and strategic biases, investors are likely to pay more attention to novel and explorative patents rather than incremental and exploitative patents. We find that firms focusing on exploitation rather than exploration tend to generate superior subsequent operating performance. Analysts do not seem to detect this, as firms currently focused on exploitation tend to outperform the market’s near-term earnings expectations. The market also seems unable to accurately incorporate innovation strategy information. We find that firms with exploitation strategies are undervalued relative to firms with exploration strategies and that this return differential is incremental to standard risk and innovation-based pricing factors examined in the prior literature. This result suggests a more nuanced view on whether stock market pressure hampers innovation.", "title": "" }, { "docid": "1c5ab22135bb293919022585bae160ef", "text": "Job satisfaction and employee performance has been a topic of research for decades. Whether job satisfaction influences employee satisfaction in organizations remains a crucial issue to managers and psychologists. That is where the problem lies. Therefore, the objective of this paper is to trace the relationship between job satisfaction and employee performance in organizations with particular reference to Nigeria. Related literature on the some theories of job satisfaction such as affective events, two-factor, equity and job characteristics was reviewed and findings from these theories indicate that a number of factors like achievement, recognition, responsibility, pay, work conditions and so on, have positive influence on employee performance in organizations. The paper adds to the theoretical debate on whether job satisfaction impacts positively on employee performance. It concludes that though the concept of job satisfaction is complex, using appropriate variables and mechanisms can go a long way in enhancing employee performance. It recommends that managers should use those factors that impact employee performance to make them happy, better their well being and the environment. It further specifies appropriate mechanisms using a theoretical approach to support empirical approaches which often lack clarity as to why the variables are related.", "title": "" }, { "docid": "227874c489b6599583f4f5a3698491ed", "text": "Since the knee joint bears the full weight load of the human body and the highest pressure loads while providing flexible movement, it is the body part most vulnerable and susceptible to osteoarthritis. In exercise therapy, the early rehabilitation stages last for approximately six weeks, during which the patient works with the physical therapist several times each week. The patient is afterwards given instructions for continuing rehabilitation exercise by him/herself at home. This study develops a rehabilitation exercise assessment mechanism using three wearable sensors mounted on the chest, thigh and shank of the working leg in order to enable the patients with knee osteoarthritis to manage their own rehabilitation progress. 
In this work, time-domain, frequency-domain features and angle information of the motion sensor signals are used to classify the exercise type and identify whether their postures are proper or not. Three types of rehabilitation exercise commonly prescribed to knee osteoarthritis patients are: Short-Arc Exercise, Straight Leg Raise, and Quadriceps Strengthening Mini-squats. After ten subjects performed the three kinds of rehabilitation activities, three validation techniques including 10-fold cross-validation, within subject cross validation, and leave-one-subject cross validation are utilized to confirm the proposed mechanism. The overall recognition accuracy for exercise type classification is 97.29% and for exercise posture identification it is 88.26%. The experimental results demonstrate the feasibility of the proposed mechanism which can help patients perform rehabilitation movements and progress effectively. Moreover, the proposed mechanism is able to detect multiple errors at once, fulfilling the requirements for rehabilitation assessment.", "title": "" }, { "docid": "d21213e0dbef657d5e7ec8689fe427ed", "text": "Cutaneous infections due to Listeria monocytogenes are rare. Typically, infections manifest as nonpainful, nonpruritic, self-limited, localized, papulopustular or vesiculopustular eruptions in healthy persons. Most cases follow direct inoculation of the skin in veterinarians or farmers who have exposure to animal products of conception. Less commonly, skin lesions may arise from hematogenous dissemination in compromised hosts with invasive disease. Here, we report the first case in a gardener that occurred following exposure to soil and vegetation.", "title": "" }, { "docid": "5793b2b2edbcb1443be7de07406f0fd2", "text": "Question answering is a complex and valuable task in natural language processing and artificial intelligence. Several deep learning models having already been proposed to solve it. In this work, we propose a deep learning model with an attention mechanism that is based on a previous work and a decoder that incorporates a wide summary of the context and question. That summary includes a condensed representation of the question, a context paragraph representation previous created by the model, as well as positional question summaries created by the attention mechanism. We demonstrate that a strong attention layer allows a deep learning model to do well even on long questions and context paragraphs in addition to contributing significantly to model performance.", "title": "" }, { "docid": "0b0e389556e7c132690d7f2a706664d1", "text": "E-government challenges are well researched in literature and well known by governments. However, being aware of the challenges of e-government implementation is not sufficient, as challenges may interrelate and impact each other. Therefore, a systematic analysis of the challenges and their interrelationships contributes to providing a better understanding of how to tackle the challenges and how to develop sustainable solutions. This paper aims to investigate existing challenges of e-government and their interdependencies in Tanzania. The collection of e-government challenges in Tanzania is implemented through interviews, desk research and observations of actors in their job. In total, 32 challenges are identified. The subsequent PESTEL analysis studied interrelationships of challenges and identified 34 interrelationships. 
The analysis of the interrelationships informs policy decision makers of issues to focus on along the planning of successfully implementing the existing e-government strategy in Tanzania. The study also identified future research needs in evaluating the findings through quantitative analysis.", "title": "" }, { "docid": "ea1b0f4e82ac9ad8593c5e4ba1567a59", "text": "This paper describes an emerging shared repository of large-text resources for creating word vectors, including pre-processed corpora and pre-trained vectors for a range of frameworks and configurations. This will facilitate reuse, rapid experimentation, and replicability of results.", "title": "" }, { "docid": "8db59f20491739420d9b40311705dbf1", "text": "With object-oriented programming languages, Object Relational Mapping (ORM) frameworks such as Hibernate have gained popularity due to their ease of use and portability to different relational database management systems. Hibernate implements the Java Persistent API, JPA, and frees a developer from authoring software to address the impedance mismatch between objects and relations. In this paper, we evaluate the performance of Hibernate by comparing it with a native JDBC implementation using a benchmark named BG. BG rates the performance of a system for processing interactive social networking actions such as view profile, extend an invitation from one member to another, and other actions. Our key findings are as follows. First, an object-oriented Hibernate implementation of each action issues more SQL queries than its JDBC counterpart. This enables the JDBC implementation to provide response times that are significantly faster. Second, one may use the Hibernate Query Language (HQL) to refine the object-oriented Hibernate implementation to provide performance that approximates the JDBC implementation.", "title": "" }, { "docid": "6120ff5b69c535e8580a3930b1edf3f2", "text": "C. Monroe,1 R. Raussendorf,2 A. Ruthven,2 K. R. Brown,3 P. Maunz,4,* L.-M. Duan,5 and J. Kim4 1Joint Quantum Institute, University of Maryland Department of Physics and National Institute of Standards and Technology, College Park, Maryland 20742, USA 2Department of Physics and Astronomy, University of British Columbia, Vancouver, British Columbia V6T1Z1, Canada 3Schools of Chemistry and Biochemistry; Computational Science and Engineering; and Physics, Georgia Institute of Technology, Atlanta, Georgia 30332, USA 4Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina 27708, USA 5Department of Physics and MCTP, University of Michigan, Ann Arbor, Michigan 48109, USA and Center for Quantum Information, Tsinghua University, Beijing 100084, China (Received 22 June 2013; published 13 February 2014)", "title": "" }, { "docid": "ef15ffc5609653488c68364d2ba77149", "text": "BACKGROUND\nBeneficial effects of probiotics have never been analyzed in an animal shelter.\n\n\nHYPOTHESIS\nDogs and cats housed in an animal shelter and administered a probiotic are less likely to have diarrhea of ≥2 days duration than untreated controls.\n\n\nANIMALS\nTwo hundred and seventeen cats and 182 dogs.\n\n\nMETHODS\nDouble blinded and placebo controlled. Shelter dogs and cats were housed in 2 separate rooms for each species. For 4 weeks, animals in 1 room for each species was fed Enterococcus faecium SF68 while animals in the other room were fed a placebo. After a 1-week washout period, the treatments by room were switched and the study continued an additional 4 weeks. 
A standardized fecal score system was applied to feces from each animal every day by a blinded individual. Feces of animals with and without diarrhea were evaluated for enteric parasites. Data were analyzed by a generalized linear mixed model using a binomial distribution with treatment being a fixed effect and the room being a random effect.\n\n\nRESULTS\nThe percentage of cats with diarrhea ≥2 days was significantly lower (P = .0297) in the probiotic group (7.4%) when compared with the placebo group (20.7%). Statistical differences between groups of dogs were not detected but diarrhea was uncommon in both groups of dogs during the study.\n\n\nCONCLUSION AND CLINICAL IMPORTANCE\nCats fed SF68 had fewer episodes of diarrhea of ≥2 days when compared with controls suggests the probiotic may have beneficial effects on the gastrointestinal tract.", "title": "" }, { "docid": "e007e34cfc7425ec1b5b0071cf69937c", "text": "Dinev and Hu 2007 332 IS professionals and students USA Survey Theory of Planned Behavior Intention to use protective information technologies Furnell et al. 2007 415 UK residents UK Survey NA Safe behavior, knowledge-seeking behavior Lee and Kozar 2005 212 Internet Users USA Survey Theory of Planned Behavior, IT Innovation Adoption of an anti-spyware system Liang and Xue 2009 NA NA Theory building PMT, Cybernetic Process Theory Problem-focused and emotionfocused coping behavior Woon et al. 2005 189 students and faculty Singapore Survey PMT Have enabled/ have not enabled a firewall on home wireless network", "title": "" }, { "docid": "59a1088003576f2e75cdbedc24ae8bdf", "text": "Two literatures or sets of articles are complementary if, considered together, they can reveal useful information of scientik interest not apparent in either of the two sets alone. Of particular interest are complementary literatures that are also mutually isolated and noninteractive (they do not cite each other and are not co-cited). In that case, the intriguing possibility akrae that thm &tfnrmnt;nn n&wd hv mwnhXno them 4. nnvnl Lyww u-c “‘1 YLL”I&.L.sU”4L 6uy’“s. u, b..S..“Y.Ayj .a.-** Y ..u. -... During the past decade, we have identified seven examples of complementary noninteractive structures in the biomedical literature. Each structure led to a novel, plausible, and testable hypothesis that, in several cases, was subsequently corroborated by medical researchers through clinical or laboratory investigation. We have also developed, tested, and described a systematic, computer-sided approach to iinding and identifying complementary noninteractive literatures. Specialization, Fragmentation, and a Connection Explosion By some obscure spontaneous process scientists have responded to the growth of science by organizing their work into soecialties, thus permitting each individual to -r-~ focus on a small part of the total literature. Specialties that grow too large tend to divide into subspecialties that have their own literatures which, by a process of repeated splitting, maintain more or less fixed and manageable size. As the total literature grows, the number of specialties, but not in general the size of each, increases (Kochen, 1963; Swanson, 199Oc). But the unintended consequence of specialization is fragmentation. By dividing up the pie, the potential relationships among its pieces tend to be neglected. 
Although scientific literature cannot, in the long run, grow disproportionately to the growth of the communities and resources that produce it, combinations of implicitly related segments of literature can grow much faster than the literature itself and can readily exceed the capacity of the community to identify and assimilate such relatedness (Swanson, 1993). The significance of the \"information explosion\" thus may lie not in an explosion of quantity per se, but in an incalculably greater combinatorial explosion of unnoticed and unintended logical connections. The Significance of Complementary Noninteractive Literatures If two literatures each of substantial size are linked by arguments that they respectively put forward - that is, are \"logically\" related, or complementary - one would expect to gain useful information by combining them. For example, suppose that one (biomedical) literature establishes that some environmental factor A influences certain internal physiological conditions and a second literature establishes that these same physiological changes influence the course of disease C. Presumably, then, anyone who reads both literatures could conclude that factor A might influence disease C. Under such conditions of complementarity one would also expect the two literatures to refer to each other. If, however, the two literatures were developed independently of one another, the logical linkage illustrated may be both unintended and unnoticed. To detect such mutual isolation, we examine the citation pattern. If two literatures are \"noninteractive\" - that is, if they have never (or seldom) been cited together, and if neither cites the other - then it is possible that scientists have not previously considered both literatures together, and so it is possible that no one is aware of the implicit A-C connection. The two conditions, complementarity and noninteraction, describe a model structure that shows how useful information can remain undiscovered even though its components consist of public knowledge (Swanson, 1987, 1991). Public Knowledge / Private Knowledge There is, of course, no way to know in any particular case whether the possibility of an AC relationship in the above model has or has not occurred to someone, or whether or not anyone has actually considered the two literatures on A and C together, a private matter that necessarily remains conjectural. However, our argument is based only on determining whether there is any printed evidence to the contrary. We are concerned with public rather than private knowledge - with the state of the record produced rather than the state of mind of the producers (Swanson, 1990d). The point of bringing together the AB and BC literatures, in any event, is not to \"prove\" an AC linkage (by considering only transitive relationships) but rather to call attention to an apparently unnoticed association that may be worth investigating. In principle any chain of scientific, including analogic, reasoning in which different links appear in noninteractive literatures may lead to the discovery of new interesting connections. \"What people know\" is a common understanding of what is meant by \"knowledge\". 
If taken in this subjective sense, the idea of \"knowledge discovery\" could mean merely that someone discovered something they hadn't known before. Our focus in the present paper is on a second sense of the word \"knowledge\", a meaning associated with the products of human intellectual activity, as encoded in the public record, rather than with the contents of the human mind. This abstract world of human-created \"objective\" knowledge is open to exploration and discovery, for it can contain territory that is subjectively unknown to anyone (Popper, 1972). Our work is directed toward the discovery of scientifically useful information implicit in the public record, but not previously made explicit. The problem we address concerns structures within the scientific literature, not within the mind. The Process of Finding Complementary Noninteractive Literatures During the past ten years, we have pursued three goals: i) to show in principle how new knowledge might be gained by synthesizing logically related noninteractive literatures; ii) to demonstrate that such structures do exist, at least within the biomedical literature; and iii) to develop a systematic process for finding them. In pursuit of goal iii, we have created interactive software and database search strategies that can facilitate the discovery of complementary structures in the published literature of science. The universe or search space under consideration is limited only by the coverage of the major scientific databases, though we have focused primarily on the biomedical field and the MEDLINE database (8 million records). In 1991, a systematic approach to finding complementary structures was outlined and became a point of departure for software development (Swanson, 1991). The system that has now taken shape is based on a 3-way interaction between computer software, bibliographic databases, and a human operator. The interaction generates information structures that are used heuristically to guide the search for promising complementary literatures. The user of the system begins by choosing a question 
The resulting list of words provides the basis for identifying title-word pathways that might provide clues to the presence of complementary arguments within the literatures corresponding to A and C. The output of this procedure is a structured title display (plus journal citation) that serves as a heuristic aid to identifying word-linked titles and serves also as an organized guide to the literature.", "title": "" }, { "docid": "cdd0df004c24963c8ad1f405b1a3e1b0", "text": "Various parts of the human body have different movements when a person is performing different physical activities.
There is a need to remotely detect human heartbeat and breathing for applications involving anti-terrorism and search-and-rescue. Ultrawideband noise radar systems are attractive because they are covert and immune from interference. The conventional time-frequency analyses of human activity are not generally applicable to nonlinear and nonstationary signals. If one can decompose the noisy baseband reflected signal and extract only the human-induced Doppler from it, the identification of various human activities becomes easier. We propose a nonstationary model to describe human motion and apply the Hilbert-Huang transform (HHT), which is adaptive to nonlinear and nonstationary signals, in order to analyze frequency characteristics of the baseband signal. When used with noise-like radar data, it is useful to covertly identify specific human movement.", "title": "" }, { "docid": "97711981f9bfe4f9ba7b2070427988d4", "text": "Mathematical models have been used to provide an explicit framework for understanding malaria transmission dynamics in human population for over 100 years. With the disease still thriving and threatening to be a major source of death and disability due to changed environmental and socio-economic conditions, it is necessary to make a critical assessment of the existing models, and study their evolution and efficacy in describing the host-parasite biology. In this article, starting from the basic Ross model, the key mathematical models and their underlying features, based on their specific contributions in the understanding of spread and transmission of malaria, have been discussed. The first aim of this article is to develop, starting from the basic models, a hierarchical structure of a range of deterministic models of different levels of complexity. The second objective is to elaborate, using some of the representative mathematical models, the evolution of modelling strategies to describe malaria incidence by including the critical features of host-vector-parasite interactions. Emphasis is more on the evolution of the deterministic differential equation based epidemiological compartment models with a brief discussion on data based statistical models. In this comprehensive survey, the approach has been to summarize the modelling activity in this area so that it helps reach a wider range of researchers working on epidemiology, transmission, and other aspects of malaria. This may facilitate mathematicians to further develop suitable models in this direction relevant to the present scenario, and help biologists and public health personnel to gain a better understanding of the modelling strategies to control the disease.", "title": "" }, { "docid": "c731c1fb8a1b1a8bd6ab8b9165de5498", "text": "Video Game Software Development is a promising area of empirical research because our first observations in an industry environment identified a lack of a systematic process and method support and rarely conducted/documented studies. Nevertheless, video games, as specific types of software products, focus strongly on user interface and game design. Thus, engineering processes, methods for game construction and verification/validation, and best-practices, derived from traditional software engineering, might be applicable in the context of video game development. We selected the Austrian games industry as a manageable and promising starting point for systematically capturing the state-of-the-practice in video game development.
In this paper we present the survey design and report on the first results of a national survey in the Austrian games industry. The results of the survey showed that the Austrian games industry is organized in a set of small and young studios with a trend toward ad-hoc and flexible development processes and limitations in systematic method support.", "title": "" }, { "docid": "3afafb908d321c6da7e0b099a8e31c40", "text": "A neural network algorithm-based system that reads handwritten ZIP codes appearing on real US mail is described. The system uses a recognition-based segmenter that is a hybrid of connected-components analysis (CCA), vertical cuts, and a neural network recognizer. Connected components that are single digits are handled by CCA. CCs that are combined or dissected digits are handled by the vertical-cut segmenter. The four main stages of processing are preprocessing, in which noise is removed and the digits are deslanted, CCA segmentation and recognition, vertical-cut-point estimation and segmentation, and directory lookup. The system was trained and tested on approximately 10000 images, five- and nine-digit ZIP code fields taken from real mail.", "title": "" }, { "docid": "941d7a7a59261fe2463f42cad9cff004", "text": "Dragon's blood is one of the renowned traditional medicines used in different cultures of the world. It has several therapeutic uses: haemostatic, antidiarrhetic, antiulcer, antimicrobial, antiviral, wound healing, antitumor, anti-inflammatory, antioxidant, etc. Besides these medicinal applications, it is used as a coloring material, varnish and also has applications in folk magic. These red saps and resins are derived from a number of disparate taxa. Despite its wide uses, little research has been done on its true source, quality control and clinical applications. In this review, we have tried to overview different sources of Dragon's blood, its source-wise chemical constituents and therapeutic uses. A brief attempt has also been made to review the techniques used for its quality control and safety.", "title": "" } ]
scidocsrr
98224d657a3655130d2a1dce131fbd45
Problem formulations and solvers in linear SVM: a review
[ { "docid": "682a288411d3c5000404f1a75c05659f", "text": "The kernel support vector machine (SVM) is one of the most widely used classification methods; however, the amount of computation required becomes the bottleneck when facing millions of samples. In this paper, we propose and analyze a novel divide-andconquer solver for kernel SVMs (DC-SVM). In the division step, we partition the kernel SVM problem into smaller subproblems by clustering the data, so that each subproblem can be solved independently and efficiently. We show theoretically that the support vectors identified by the subproblem solution are likely to be support vectors of the entire kernel SVM problem, provided that the problem is partitioned appropriately by kernel clustering. In the conquer step, the local solutions from the subproblems are used to initialize a global coordinate descent solver, which converges quickly as suggested by our analysis. By extending this idea, we develop a multilevel Divide-and-Conquer SVM algorithm with adaptive clustering and early prediction strategy, which outperforms state-of-the-art methods in terms of training speed, testing accuracy, and memory usage. As an example, on the covtype dataset with half-a-million samples, DC-SVM is 7 times faster than LIBSVM in obtaining the exact SVM solution (to within 10−6 relative error) which achieves 96.15% prediction accuracy. Moreover, with our proposed early prediction strategy, DCSVM achieves about 96% accuracy in only 12 minutes, which is more than 100 times faster than LIBSVM.", "title": "" }, { "docid": "1616d9fb3fb2b2a3c97f0bf1d36d8b79", "text": "Platt’s probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al. (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.", "title": "" } ]
[ { "docid": "bc3658f75aa9af27a16ded8def1ad522", "text": "Tracking human pose in real-time is a difficult problem with many interesting applications. Existing solutions suffer from a variety of problems, especially when confronted with unusual human poses. In this paper, we derive an algorithm for tracking human pose in real-time from depth sequences based on MAP inference in a probabilistic temporal model. The key idea is to extend the iterative closest points (ICP) objective by modeling the constraint that the observed subject cannot enter free space, the area of space in front of the true range measurements. Our primary contribution is an extension to the articulated ICP algorithm that can efficiently enforce this constraint. Our experiments show that including this term improves tracking accuracy significantly. The resulting filter runs at 125 frames per second using a single desktop CPU core. We provide extensive experimental results on challenging real-world data, which show that the algorithm outperforms the previous state-of-the-art trackers both in computational efficiency and accuracy.", "title": "" }, { "docid": "a9595ea31ebfe07ac9d3f7fccf0d1c05", "text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.", "title": "" }, { "docid": "77d0786af4c5eee510a64790af497e25", "text": "Mobile computing is a revolutionary technology, born as a result of remarkable advances in computer hardware and wireless communication. Mobile applications have become increasingly popular in recent years. Today, it is not uncommon to see people playing games or reading mails on handphones. With the rapid advances in mobile computing technology, there is an increasing demand for processing realtime transactions in a mobile environment. Hence there is a strong need for efficient transaction management, data access modes and data management, consistency control and other mobile data management issues. This survey paper will cover issues related to concurrency control in mobile database. This paper studies concurrency control problem in mobile database systems, we analyze the features of mobile database and concurrency control techniques. With the increasing number of mobile hosts there are many new solutions and algorithms for concurrency control being proposed and implemented. We wish that our paper has served as a survey of the important solutions in the fields of concurrency control in mobile database. 
Keywords-component; Distributed Real-time Databases, Mobile Real-time Databases, Concurrency Control, Data Similarity, and Transaction Scheduling.", "title": "" }, { "docid": "9f6f22e320b91838c9be8f56d3f0564d", "text": "We present an approach for ontology population from natural language English texts that extracts RDF triples according to FrameBase, a Semantic Web ontology derived from FrameNet. Processing is decoupled in two independently-tunable phases. First, text is processed by several NLP tasks, including Semantic Role Labeling (SRL), whose results are integrated in an RDF graph of mentions, i.e., snippets of text denoting some entity/fact. Then, the mention graph is processed with SPARQL-like rules using a specifically created mapping resource from NomBank/PropBank/FrameNet annotations to FrameBase concepts, producing a knowledge graph whose content is linked to DBpedia and organized around semantic frames, i.e., prototypical descriptions of events and situations. A single RDF/OWL representation is used where each triple is related to the mentions/tools it comes from. We implemented the approach in PIKES, an open source tool that combines two complementary SRL systems and provides a working online demo. We evaluated PIKES on a manually annotated gold standard, assessing precision/recall in (i) populating FrameBase ontology, and (ii) extracting semantic frames modeled after standard predicate models, for comparison with state-of-the-art tools for the Semantic Web. We also evaluated (iii) sampled precision and execution times on a large corpus of 110 K Wikipedia-like pages.", "title": "" }, { "docid": "045a4622691d1ae85593abccb823b193", "text": "The capability of Corynebacterium glutamicum for glucose-based synthesis of itaconate was explored, which can serve as building block for production of polymers, chemicals, and fuels. C. glutamicum was highly tolerant to itaconate and did not metabolize it. Expression of the Aspergillus terreus CAD1 gene encoding cis-aconitate decarboxylase (CAD) in strain ATCC13032 led to the production of 1.4mM itaconate in the stationary growth phase. Fusion of CAD with the Escherichia coli maltose-binding protein increased its activity and the itaconate titer more than two-fold. Nitrogen-limited growth conditions boosted CAD activity and itaconate titer about 10-fold to values of 1440 mU mg(-1) and 30 mM. Reduction of isocitrate dehydrogenase activity via exchange of the ATG start codon to GTG or TTG resulted in maximal itaconate titers of 60 mM (7.8 g l(-1)), a molar yield of 0.4 mol mol(-1), and a volumetric productivity of 2.1 mmol l(-1) h(-1).", "title": "" }, { "docid": "8c026a368fcf73d6f6bdac66e8f6a603", "text": "In this paper, a novel reconfigurable open slot antenna has been proposed for LTE smartphone applications to cover a wide bandwidth of 698–960 and 1710–2690 MHz. The antenna is located at the bottom portion of the mobile phone and is integrated with metal rim, thereby occupying a small space and providing mechanical stability to the mobile phone. Varactor diode is used to cover the lower band frequencies, so as to achieve a good frequency coverage and antenna miniaturization. The operational principles of the antenna are studied and the final design is optimized, fabricated, and tested. It has achieved the desired impedance bandwidth and the total efficiency of minimum 50% in free space throughout the required bands. The antenna performance with mobile phone components and human hand is also been studied. 
Furthermore, the SAR in a human head is investigated and is found to be within allowable SAR limits. Finally a multiple-input multiple-output antenna configuration with high isolation is proposed; it has an identical reconfigurable open slot antenna integrated at the top edge of the mobile phone acting as the secondary antenna for 698–960 and 1710–2690 MHz. Thus the proposed antenna is an excellent candidate for LTE smartphones and mobile devices.", "title": "" }, { "docid": "9dc427deaa9cf0b9541ed3f2cca0892c", "text": "One of the major challenges in the field of Natural Language Processing (NLP) is the handling of idioms; seemingly ordinary phrases which could be further conjugated or even spread across the sentence to fit the context. Since idioms are a part of natural language, the ability to tackle them brings us closer to creating efficient NLP tools. This paper presents a multilingual parallel idiom dataset for seven Indian languages in addition to English and demonstrates its usefulness for two NLP applications Machine Translation and Sentiment Analysis. We observe significant improvement for both the subtasks over baseline models trained without employing the idiom dataset.", "title": "" }, { "docid": "0a30e4de94a63b9866183ade4204ecd0", "text": "Pharyngodon medinae García-Calvente, 1948 (Nematoda: Pharyngodonidae) is redescribed from Podarcis pityusensis (Bosca, 1883) (Sauria: Lacertidae) of the Balearic Islands (Spain) and confirmed as a member of the genus Skrjabinodon Inglis, 1968. A systematic review of S. medinae and closely related species is also given. Parathelandros canariensis is referred to Skrjabinodon as a new combination and Parathelandros Magzoub et al., 1980 is dismissed as a junior homonym of Parathelandros Baylis, 1930.", "title": "" }, { "docid": "c215a497d39f4f95a9fc720debb14b05", "text": "Adding frequency reconfigurability to a compact metamaterial-inspired antenna is investigated. The antenna is a printed monopole with an incorporated slot and is fed by a coplanar waveguide (CPW) line. This antenna was originally inspired from the concept of negative-refractive-index metamaterial transmission lines and exhibits a dual-band behavior. By using a varactor diode, the lower band (narrowband) of the antenna, which is due to radiation from the incorporated slot, can be tuned over a broad frequency range, while the higher band (broadband) remains effectively constant. A detailed equivalent circuit model is developed that predicts the frequency-tuning behavior for the lower band of the antenna. The circuit model shows the involvement of both CPW even and odd modes in the operation of the antenna. Experimental results show that, for a varactor diode capacitance approximately ranging from 0.1-0.7 pF, a tuning range of 1.6-2.23 GHz is achieved. The size of the antenna at the maximum frequency is 0.056 λ0 × 0.047 λ0 and the antenna is placed over a 0.237 λ0 × 0.111 λ0 CPW ground plane (λ0 being the wavelength in vacuum).", "title": "" }, { "docid": "62d93b9bcc66f402cd045f8586b0b62f", "text": "Passive crossbar resistive random access memory (RRAM) arrays require select devices with nonlinear I-V characteristics to address the sneak-path problem. Here, we present a systematical analysis to evaluate the performance requirements of select devices during the read operation of RRAM arrays for the proposed one-selector-one-resistor (1S1R) configuration with serially connected selector/storage element. 
We found high selector current density is critical and the selector nonlinearity (ON/OFF) requirement can be relaxed at present. Different read schemes were analyzed to achieve high read margin and low power consumption. Design optimizations of the sense resistance and the storage elements are also discussed.", "title": "" }, { "docid": "bfbca1007aff8f95e843e5530a833fb9", "text": "Airborne wind energy systems aim to generate renewable energy by means of the aerodynamic lift produced using a wing tethered to the ground and controlled to fly crosswind paths. The problem of maximizing the average power developed by the generator, in the presence of limited information on wind speed and direction, is considered. At constant tether speed operation, the power is related to the traction force generated by the wing. First, a study of the traction force is presented for a general path parametrization. In particular, the sensitivity of the traction force on the path parameters is analyzed. Then, the results of this analysis are exploited to design an algorithm to maximize the force, hence the power, in real-time. The algorithm uses only the measured traction force on the tether and the wing's position, and it is able to adapt the system's operation to maximize the average force with uncertain and time-varying wind. The influence of inaccurate sensor readings and turbulent wind are also discussed. The presented algorithm is not dependent on a specific hardware setup and can act as an extension of existing control structures. Both numerical simulations and experimental results are presented to highlight the effectiveness of the approach.", "title": "" }, { "docid": "b8429bf4b9fd7f331453234736d68b91", "text": "We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design. The discrepancy between the minimax and maximin objective values could serve as a proxy for the difficulties that the alternating gradient descent encounters in the optimization of GANs. In this work, we give new results on the benefits of multi-generator architecture of GANs. We show that the minimax gap shrinks to as the number of generators increases with rate Õ(1/ ). This improves over the best-known result of Õ(1/ ). At the core of our techniques is a novel application of Shapley-Folkman lemma to the generic minimax problem, where in the literature the technique was only known to work when the objective function is restricted to the Lagrangian function of a constraint optimization problem. Our proposed Stackelberg GAN performs well experimentally in both synthetic and real-world datasets, improving Fréchet Inception Distance by 14.61% over the previous multi-generator GANs on the benchmark datasets.", "title": "" }, { "docid": "36f7c1f48ff5f34bcce15e91c0713466", "text": "Many distributed multimedia applications rely on video analysis algorithms for automated video and image processing. Little is known, however, about the minimum video quality required to ensure an accurate performance of these algorithms. In an attempt to understand these requirements, we focus on a set of commonly used face analysis algorithms. Using standard datasets and live videos, we conducted experiments demonstrating that the algorithms show almost no decrease in accuracy until the input video is reduced to a certain critical quality, which amounts to significantly lower bitrate compared to the quality commonly acceptable for human vision. 
Since computer vision percepts video differently than human vision, existing video quality metrics, designed for human perception, cannot be used to reason about the effects of video quality reduction on accuracy of video analysis algorithms. We therefore investigate two alternate video quality metrics, blockiness and mutual information, and show how they can be used to estimate the critical video qualities for face analysis algorithms.", "title": "" }, { "docid": "7cb61609adf6e3c56c762d6fe322903c", "text": "In this paper, we give an overview of the BitBlaze project, a new approach to computer security via binary analysis. In particular, BitBlaze focuses on building a unified binary analysis platform and using it to provide novel solutions to a broad spectrum of different security problems. The binary analysis platform is designed to enable accurate analysis, provide an extensible architecture, and combines static and dynamic analysis as well as program verification techniques to satisfy the common needs of security applications. By extracting security-related properties from binary programs directly, BitBlaze enables a principled, root-cause based approach to computer security, offering novel and effective solutions, as demonstrated with over a dozen different security applications.", "title": "" }, { "docid": "3fa16d5e442bc4a2398ba746d6aaddfe", "text": "Although many users create predictable passwords, the extent to which users realize these passwords are predictable is not well understood. We investigate the relationship between users' perceptions of the strength of specific passwords and their actual strength. In this 165-participant online study, we ask participants to rate the comparative security of carefully juxtaposed pairs of passwords, as well as the security and memorability of both existing passwords and common password-creation strategies. Participants had serious misconceptions about the impact of basing passwords on common phrases and including digits and keyboard patterns in passwords. However, in most other cases, participants' perceptions of what characteristics make a password secure were consistent with the performance of current password-cracking tools. We find large variance in participants' understanding of how passwords may be attacked, potentially explaining why users nonetheless make predictable passwords. We conclude with design directions for helping users make better passwords.", "title": "" }, { "docid": "2b688f9ca05c2a79f896e3fee927cc0d", "text": "This paper presents a new synchronous-reference frame (SRF)-based control method to compensate power-quality (PQ) problems through a three-phase four-wire unified PQ conditioner (UPQC) under unbalanced and distorted load conditions. The proposed UPQC system can improve the power quality at the point of common coupling on power distribution systems under unbalanced and distorted load conditions. The simulation results based on Matlab/Simulink are discussed in detail to support the SRF-based control method presented in this paper. The proposed approach is also validated through experimental study with the UPQC hardware prototype.", "title": "" }, { "docid": "5f941adae33e1433ebaeeb2dbb69e6ca", "text": "Drawing a sample from a discrete distribution is one of the building components for Monte Carlo methods. Like other sampling algorithms, discrete sampling also suffers from high computational burden in large-scale inference problems. 
We study the problem of sampling a discrete random variable with a high degree of dependency that is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution with a subsampling approach. We make a novel connection between the discrete sampling and Multi-Armed Bandits problems with a finite reward population and provide three algorithms with theoretical guarantees. Empirical evaluations show the robustness and efficiency of the approximate algorithms in both synthetic and real-world large-scale problems.", "title": "" }, { "docid": "329259263340b063bfad7bc34f5d376a", "text": "We analyze the problem of disparate impact in credit scoring and evaluate three approaches to identifying and correcting the problem, namely: 1) post-development univariate test with variable elimination, 2) postdevelopment multivariate test with variable elimination, 3) control variable approach with coefficient adjustment. The third approach is a new innovation developed by the authors. Results are illustrated with simulation data calibrated to actual distributions of typical variables used in score development. Results show that controlling disparate impact by eliminating variables may have unintended and undesirable consequences.", "title": "" }, { "docid": "9b7fb16ad573aecd15350aa5f6a310c6", "text": "A general analysis and design procedure is developed for the asymmetrical multisection power divider with arbitrary power division ratio and arbitrary specifications of input and output impedance matching over any desired frequency bandwidth. The even- and odd-mode analysis, which was previously applied to the design of multisection Gysel power dividers, required that the unequal power division ratios be accompanied with appropriately proportional output impedances. This requirement is relaxed here. The equivalent circuits are first obtained for the divider and then their scattering parameters are determined. Some error functions are then constructed by the method of least squares. Their minimization determines the geometrical dimensions of the optimum divider. An approximate method based on the even and odd modes is developed for its initial design of the divider. Two examples of single- and double-section dividers are designed. Their frequency responses of isolation and transmission coefficients are obtained by the proposed method, HFSS software, fabrication, and measurement. They agree within the approximate assumptions. A two-section and two-way power divider is designed and fabricated by the proposed method for the case of unequal port impedances in the L-band. The measured isolation between the outputs is better than -22 dB in 44% of the band.", "title": "" }, { "docid": "cca94491276328a03e0a56e7460bf50f", "text": "Because of large amounts of unstructured data generated on the Internet, entity relation extraction is believed to have high commercial value. Entity relation extraction is a case of information extraction and it is based on entity recognition. This paper firstly gives a brief overview of relation extraction. On the basis of reviewing the history of relation extraction, the research status of relation extraction is analyzed. Then the paper divides theses research into three categories: supervised machine learning methods, semi-supervised machine learning methods and unsupervised machine learning method, and toward to the deep learning direction.", "title": "" } ]
scidocsrr
2cdf3c44fda94dbce085794fc254d176
From macro- to microplastics - Analysis of EU regulation along the life cycle of plastic bags.
[ { "docid": "984b2f763a14331c5da36cd08f7482de", "text": "This review of 68 studies compares the methodologies used for the identification and quantification of microplastics from the marine environment. Three main sampling strategies were identified: selective, volume-reduced, and bulk sampling. Most sediment samples came from sandy beaches at the high tide line, and most seawater samples were taken at the sea surface using neuston nets. Four steps were distinguished during sample processing: density separation, filtration, sieving, and visual sorting of microplastics. Visual sorting was one of the most commonly used methods for the identification of microplastics (using type, shape, degradation stage, and color as criteria). Chemical and physical characteristics (e.g., specific density) were also used. The most reliable method to identify the chemical composition of microplastics is by infrared spectroscopy. Most studies reported that plastic fragments were polyethylene and polypropylene polymers. Units commonly used for abundance estimates are \"items per m(2)\" for sediment and sea surface studies and \"items per m(3)\" for water column studies. Mesh size of sieves and filters used during sampling or sample processing influence abundance estimates. Most studies reported two main size ranges of microplastics: (i) 500 μm-5 mm, which are retained by a 500 μm sieve/net, and (ii) 1-500 μm, or fractions thereof that are retained on filters. We recommend that future programs of monitoring continue to distinguish these size fractions, but we suggest standardized sampling procedures which allow the spatiotemporal comparison of microplastic abundance across marine environments.", "title": "" } ]
[ { "docid": "d512c7809118ddb41be00b6070991395", "text": "In this paper we present TroFi (Trope Finder), a system for automatically classifying literal and nonliteral usages of verbs through nearly unsupervised word-sense disambiguation and clustering techniques. TroFi uses sentential context instead of selectional constraint violations or paths in semantic hierarchies. It also uses literal and nonliteral seed sets acquired and cleaned without human supervision in order to bootstrap learning. We adapt a word-sense disambiguation algorithm to our task and augment it with multiple seed set learners, a voting schema, and additional features like SuperTags and extrasentential context. Detailed experiments on hand-annotated data show that our enhanced algorithm outperforms the baseline by 24.4%. Using the TroFi algorithm, we also build the TroFi Example Base, an extensible resource of annotated literal/nonliteral examples which is freely available to the NLP research community.", "title": "" }, { "docid": "7ebff2391401cef25b27d510675e9acd", "text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.", "title": "" }, { "docid": "2b8aa68835bc61f3d0b5da39441185c9", "text": "This position paper explores the threat to individual privacy due to the widespread use of consumer drones. Present day consumer drones are equipped with sensors such as cameras and microphones, and their types and numbers can be well expected to increase in future. 
Drone operators have absolute control on where the drones fly and what the on-board sensors record with no options for bystanders to protect their privacy. This position paper proposes a policy language that allows homeowners, businesses, governments, and privacy-conscious individuals to specify location access-control for drones, and discusses how these policy-based controls might be realized in practice. This position paper also explores the potential future problem of managing consumer drone traffic that is likely to emerge with increasing use of consumer drones for various tasks. It proposes a privacy preserving traffic management protocol for directing drones towards their respective destinations without requiring drones to reveal their destinations.", "title": "" }, { "docid": "fc1baaeb129ace3a6e76d447b3199bd2", "text": "Many computer vision problems can be formulated in a Bayesian framework based on Markov random fields (MRF) or conditional random fields (CRF). Generally, the MRF/CRF model is learned independently of the inference algorithm that is used to obtain the final result. In this paper, we observe considerable gains in speed and accuracy by training the MRF/CRF model together with a fast and suboptimal inference algorithm. An active random field (ARF) is defined as a combination of a MRF/CRF based model and a fast inference algorithm for the MRF/CRF model. This combination is trained through an optimization of a loss function and a training set consisting of pairs of input images and desired outputs. We apply the ARF concept to image denoising, using the Fields of Experts MRF together with a 1-4 iteration gradient descent algorithm for inference. Experimental validation on unseen data shows that the ARF approach obtains an improved benchmark performance as well as a 1000-3000 times speedup compared to the Fields of Experts MRF. Using the ARF approach, image denoising can be performed in real-time, at 8 fps on a single CPU for a 256times256 image sequence, with close to state-of-the-art accuracy.", "title": "" }, { "docid": "6f16ccc24022c4fc46f8b0b106b0f3c4", "text": "We reviewed 25 patients ascertained through the finding of trigonocephaly/metopic synostosis as a prominent manifestation. In 16 patients, trigonocephaly/metopic synostosis was the only significant finding (64%); 2 patients had metopic/sagittal synostosis (8%) and in 7 patients the trigonocephaly was part of a syndrome (28%). Among the nonsyndromic cases, 12 were males and 6 were females and the sex ratio was 2 M:1 F. Only one patient with isolated trigonocephaly had an affected parent (5.6%). All nonsyndromic patients had normal psychomotor development. In 2 patients with isolated metopic/sagittal synostosis, FGFR2 and FGFR3 mutations were studied and none were detected. Among the syndromic cases, two had Jacobsen syndrome associated with deletion of chromosome 11q 23 (28.5%). Of the remaining five syndromic cases, different conditions were found including Say-Meyer syndrome, multiple congenital anomalies and bilateral retinoblastoma with no detectable deletion in chromosome 13q14.2 by G-banding chromosomal analysis and FISH, I-cell disease, a new acrocraniofacial dysostosis syndrome, and Opitz C trigonocephaly syndrome. The last two patients were studied for cryptic chromosomal rearrangements, with SKY and subtelomeric FISH probes. Also FGFR2 and FGFR3 mutations were studied in two syndromic cases, but none were found. 
This study demonstrates that the majority of cases with nonsyndromic trigonocephaly are sporadic and benign, apart from the associated cosmetic implications. Syndromic trigonocephaly cases are causally heterogeneous and associated with chromosomal as well as single gene disorders. An investigation to delineate the underlying cause of trigonocephaly is indicated because of its important implications on medical management for the patient and the reproductive plans for the family.", "title": "" }, { "docid": "9ed3d4d48c06cefe9c920e82dbacf9d9", "text": "STUDY OBJECTIVE\nEmergency department (ED) crowding is a prevalent health delivery problem and may adversely affect the outcomes of patients requiring admission. We assess the association of ED crowding with subsequent outcomes in a general population of hospitalized patients.\n\n\nMETHODS\nWe performed a retrospective cohort analysis of patients admitted in 2007 through the EDs of nonfederal, acute care hospitals in California. The primary outcome was inpatient mortality. Secondary outcomes included hospital length of stay and costs. ED crowding was established by the proxy measure of ambulance diversion hours on the day of admission. To control for hospital-level confounders of ambulance diversion, we defined periods of high ED crowding as those days within the top quartile of diversion hours for a specific facility. Hierarchic regression models controlled for demographics, time variables, patient comorbidities, primary diagnosis, and hospital fixed effects. We used bootstrap sampling to estimate excess outcomes attributable to ED crowding.\n\n\nRESULTS\nWe studied 995,379 ED visits resulting in admission to 187 hospitals. Patients who were admitted on days with high ED crowding experienced 5% greater odds of inpatient death (95% confidence interval [CI] 2% to 8%), 0.8% longer hospital length of stay (95% CI 0.5% to 1%), and 1% increased costs per admission (95% CI 0.7% to 2%). Excess outcomes attributable to periods of high ED crowding included 300 inpatient deaths (95% CI 200 to 500 inpatient deaths), 6,200 hospital days (95% CI 2,800 to 8,900 hospital days), and $17 million (95% CI $11 to $23 million) in costs.\n\n\nCONCLUSION\nPeriods of high ED crowding were associated with increased inpatient mortality and modest increases in length of stay and costs for admitted patients.", "title": "" }, { "docid": "6a143e9aab34836fc34ffcd6cc9d1096", "text": "MOTIVATION\nDNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory due to the lack of a systematic framework that can accommodate noise, variability, and low replication often typical of microarray data.\n\n\nRESULTS\nWe develop a Bayesian probabilistic framework for microarray data analysis. At the simplest level, we model log-expression values by independent normal distributions, parameterized by corresponding means and variances with hierarchical prior distributions. We derive point estimates for both parameters and hyperparameters, and regularized expressions for the variance of each gene by combining the empirical variance with a local background variance associated with neighboring genes. An additional hyperparameter, inversely related to the number of empirical observations, determines the strength of the background variance. 
Simulations show that these point estimates, combined with a t -test, provide a systematic inference approach that compares favorably with simple t -test or fold methods, and partly compensate for the lack of replication.", "title": "" }, { "docid": "a65930b1f31421bb4222933a36ac93c7", "text": "Personalized nutrition is fast becoming a reality due to a number of technological, scientific, and societal developments that complement and extend current public health nutrition recommendations. Personalized nutrition tailors dietary recommendations to specific biological requirements on the basis of a person's health status and goals. The biology underpinning these recommendations is complex, and thus any recommendations must account for multiple biological processes and subprocesses occurring in various tissues and must be formed with an appreciation for how these processes interact with dietary nutrients and environmental factors. Therefore, a systems biology-based approach that considers the most relevant interacting biological mechanisms is necessary to formulate the best recommendations to help people meet their wellness goals. Here, the concept of \"systems flexibility\" is introduced to personalized nutrition biology. Systems flexibility allows the real-time evaluation of metabolism and other processes that maintain homeostasis following an environmental challenge, thereby enabling the formulation of personalized recommendations. Examples in the area of macro- and micronutrients are reviewed. Genetic variations and performance goals are integrated into this systems approach to provide a strategy for a balanced evaluation and an introduction to personalized nutrition. Finally, modeling approaches that combine personalized diagnosis and nutritional intervention into practice are reviewed.", "title": "" }, { "docid": "1d21b7855d2585ad260859f76ac7a28b", "text": "This work describes a new kind of soft gripper. Inspiration on nature is usual in Soft Robotics device creation. In this case, the inspiration came from a sea lamprey in order to create a closed structure soft robotic gripper. Usually the grippers have one or more soft actuators that act like fingers. Since it is difficult to know whether the fingers are really grasping an object, it is being proposed a closed structure to deal with this problem. While the proposed gripper involves the object as whole, a homogeneous force is applied on it. In addition to the details of the closed structure actuator manufacture description, two procedure tests are presented: an analysis of load versus pressure function and of the force versus pressure characteristics. At the end, some new research topics to the proposed soft gripper are discussed.", "title": "" }, { "docid": "2ea3d39abcc1287d36c01bc20079ac69", "text": "Speeded-Up Robust Features (SURF) is a robust and useful feature detector for various vision-based applications but it is unable to detect symmetrical objects. This paper proposes a new symmetrical SURF descriptor to enrich the power of SURF to detect all possible symmetrical matching pairs through a mirroring transformation. A vehicle make and model recognition (MMR) application is then adopted to prove the practicability and feasibility of the method. To detect vehicles from the road, the proposed symmetrical descriptor is first applied to determine the region of interest of each vehicle from the road without using any motion features. 
This scheme provides two advantages: there is no need for background subtraction and it is extremely efficient for real-time applications. Two MMR challenges, namely multiplicity and ambiguity problems, are then addressed. The multiplicity problem stems from one vehicle model often having different model shapes on the road. The ambiguity problem results from vehicles from different companies often sharing similar shapes. To address these two problems, a grid division scheme is proposed to separate a vehicle into several grids; different weak classifiers that are trained on these grids are then integrated to build a strong ensemble classifier. The histogram of gradient and SURF descriptors are adopted to train the weak classifiers through a support vector machine learning algorithm. Because of the rich representation power of the grid-based method and the high accuracy of vehicle detection, the ensemble classifier can accurately recognize each vehicle.", "title": "" }, { "docid": "19ebb5c0cdf90bf5aef36ad4b9f621a1", "text": "There has been a dramatic increase in the number and complexity of new ventilation modes over the last 30 years. The impetus for this has been the desire to improve the safety, efficiency, and synchrony of ventilator-patient interaction. Unfortunately, the proliferation of names for ventilation modes has made understanding mode capabilities problematic. New modes are generally based on increasingly sophisticated closed-loop control systems or targeting schemes. We describe the 6 basic targeting schemes used in commercially available ventilators today: set-point, dual, servo, adaptive, optimal, and intelligent. These control systems are designed to serve the 3 primary goals of mechanical ventilation: safety, comfort, and liberation. The basic operations of these schemes may be understood by clinicians without any engineering background, and they provide the basis for understanding the wide variety of ventilation modes and their relative advantages for improving patient-ventilator synchrony. Conversely, their descriptions may provide engineers with a means to better communicate to end users.", "title": "" }, { "docid": "d3125a81fea90a5ed3181843060a66cf", "text": "We propose several nonparametric predictors of the mid-price in a limit order book, based on different features constructed from the order book data observed contemporaneously. contemporaneously and in the recent past. We evaluate our predictors in the context of an order execution task by constructing order execution strategies that incorporate these predictors. In our evaluations, we use a large dataset of historical order placements, cancellations, and trades over a five-month period from 2013 to 2014 for liquid stocks traded on NASDAQ. We show that some of the features achieve statistically significant improvements compared to some standard strategies that do not incorporate price forecasting. For the two features that achieve the best performance, the trading cost improvement is on the order of one basis point, which can be economically very significant for asset managers with large portfolio turnovers and for brokers with considerable trading volumes.", "title": "" }, { "docid": "785cb08c500aea1ead360138430ba018", "text": "A recent “third wave” of neural network (NN) approaches now delivers state-of-the-art performance in many machine learning tasks, spanning speech recognition, computer vision, and natural language processing. 
Because these modern NNs often comprise multiple interconnected layers, work in this area is often referred to as deep learning. Recent years have witnessed an explosive growth of research into NN-based approaches to information retrieval (IR). A significant body of work has now been created. In this paper, we survey the current landscape of Neural IR research, paying special attention to the use of learned distributed representations of textual units. We highlight the successes of neural IR thus far, catalog obstacles to its wider adoption, and suggest potentially promising directions for future research.", "title": "" }, { "docid": "f3a1789e765ea0325a3b31e0b436543d", "text": "Medical care is vital and challenging task as the amount of unstructured and unformalized data has grown dramatically over last decades. The article is dedicated to SMDA project an attempt to build a framework for semantic medicine application for Almazov medical research center, FANW MRC. In this paper we investigate modern approaches to medical textual data processing and analysis, however mentioned approaches do not give a complete background for solving our task. We spot a process as a combination of existing tools as well as our heuristic algorithms, techniques and tools. The paper proposes a new approach to natural language processing and concept extraction applied to medical certificates, doctors’ notes and patients’ diaries. The main purpose of the article is to present a way to solve a particular problem of medical concept extraction and knowledge formalization from an unstructured, lacking in syntax and noisy text.", "title": "" }, { "docid": "d2454e1236b51349c06b67f8a807b319", "text": "This paper investigates capabilities of social media, such as Facebook, Twitter, Delicious, Digg and others, for their current and potential impact on the supply chain. In particular, this paper examines the use of social media to capture the impact on supply chain events and develop a context for those events. This paper also analyzes the use of social media in the supply chain to build relationships among supply chain participants. Further, this paper investigates the of use user supplied tags as a basis of evaluating and extending an ontology for supply chains. In addition, using knowledge discovery from social media, a number of concepts related to the supply chain are examined, including supply chain reputation and influence within the supply chain. Prediction markets are analyzed for their potential use in supply chains. Finally, this paper investigates the integration of traditional knowledge management along with knowledge generated from social media.", "title": "" }, { "docid": "5e2eee141595ae58ca69ee694dc51c8a", "text": "Evidence-based dietary information represented as unstructured text is a crucial information that needs to be accessed in order to help dietitians follow the new knowledge arrives daily with newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They are focused on, for example extracting gene mentions, proteins mentions, relationships between genes and proteins, chemical concepts and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. 
DrNER is a rule-based NER that consists of two phases. The first one involves the detection and determination of the entities mention, and the second one involves the selection and extraction of the entities. We evaluate the method by using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation of the method showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations.", "title": "" }, { "docid": "72bc2130b650ec95c459507eb1159323", "text": "Prior work has identified several optimal algorithms for scheduling independent, implicit-deadline sporadic (or periodic) real-time tasks on identical multiprocessors. These algorithms, however, are subject to high conceptual complexity and typically incur considerable runtime overheads. This paper establishes that, empirically, near-optimal schedulability can also be achieved with a far simpler approach that combines three well-known techniques (reservations, semi-partitioned scheduling, and period transformation) with some novel task-placement heuristics.In large-scale schedulability experiments, the proposed approach is shown to achieve near-optimal hard real-time schedulability (99+% schedulable utilization) across a wide range of processor and task counts. With an implementation in LITMUSRT, the proposed approach is shown to be practical and to incur only low runtime overheads, comparable to a conventional partitioned scheduler. It is further shown that basic slack management techniques can help to avoid more than 50% of all migrations of semi-partitioned reservations if tasks execute on average for less than their provisioned worst-case execution time.Two main conclusions are drawn: pragmatically speaking, global scheduling is not required to support static workloads of independent, implicit-deadline sporadic (or periodic) tasks; and since such simple workloads are well supported, future research on multiprocessor real-time scheduling should consider more challenging workloads (e.g., adaptive workloads, dynamic task arrivals or mode changes, shared resources, precedence constraints, etc.).", "title": "" }, { "docid": "1c931bd85e8985fcdabc0f7b20a1b2ac", "text": "This paper presents a power factor correction (PFC)-based bridgeless Luo (BL-Luo) converter-fed brushless dc (BLDC) motor drive. A single voltage sensor is used for the speed control of the BLDC motor and PFC at ac mains. The voltage follower control is used for a BL-Luo converter operating in discontinuous inductor current mode. The speed of the BLDC motor is controlled by an approach of variable dc-link voltage, which allows a low-frequency switching of the voltage source inverter for the electronic commutation of the BLDC motor, thus offering reduced switching losses. The proposed BLDC motor drive is designed to operate over a wide range of speed control with an improved power quality at ac mains. The power quality indices thus obtained are under the recommended limits of IEC 61000-3-2. The performance of the proposed drive is validated with test results obtained on a developed prototype of the drive.", "title": "" }, { "docid": "019c2d5927e54ae8ce3fc7c5b8cff091", "text": "In this paper, we present Affivir, a video browsing system that recommends Internet videos that match a user’s affective preference. 
Affivir models a user’s watching behavior as sessions, and dynamically adjusts session parameters to cater to the user’s current mood. In each session, Affivir discovers a user’s affective preference through user interactions, such as watching or skipping videos. Affivir uses video affective features (motion, shot change rate, sound energy, and audio pitch average) to retrieve videos that have similar affective responses. To efficiently search videos of interest from our video repository, all videos in the repository are pre-processed and clustered. Our experimental results show that Affivir has made a significant improvement in user satisfaction and enjoyment, compared with several other popular baseline approaches.", "title": "" }, { "docid": "4d12a470a2f678142091dd5232050235", "text": "Learning a deep model from small data is still an open and challenging problem. We focus on one-shot classification with a deep learning approach based on a small quantity of training samples. We propose a novel deep learning approach named Local Contrast Learning (LCL), based on the key insight about human cognitive behavior that a human recognizes objects in a specific context by contrasting the objects in the context or in his/her memory. LCL is used to train a deep model that can contrast the recognizing sample with a couple of contrastive samples randomly drawn and shuffled. On the one-shot classification task on Omniglot, the deep model based on LCL, with 122 layers and 1.94 million parameters, which was trained on a tiny dataset with only 60 classes and 20 samples per class, achieved an accuracy of 97.99% that outperforms humans and the state of the art established by Bayesian Program Learning (BPL) trained on 964 classes. LCL is a fundamental idea which can be applied to alleviate a parametric model’s overfitting caused by a lack of training samples.", "title": "" } ]
scidocsrr
52b5a1f0f8578ef0ff5126328502e22a
Resonant Magnetic Field Sensors Based On MEMS Technology
[ { "docid": "d969dfa0584101410fd2868f8de918bb", "text": "Although fluxgates may have resolution of 50 pT and absolute precission of 1 nT, their accuracy is often degraded by crossfield response, non-linearities, hysteresis and perming effects. The trends are miniaturization, lower power consumption and production cost, non-linear tuning and digital processing. New core shapes and signal processing methods have been suggested.", "title": "" } ]
[ { "docid": "554d0255aef7ffac9e923da5d93b97e3", "text": "In this demo paper, we present a text simplification approach that is directed at improving the performance of state-of-the-art Open Relation Extraction (RE) systems. As syntactically complex sentences often pose a challenge for current Open RE approaches, we have developed a simplification framework that performs a pre-processing step by taking a single sentence as input and using a set of syntactic-based transformation rules to create a textual input that is easier to process for subsequently applied Open RE systems.", "title": "" }, { "docid": "9a2d79d9df9e596e26f8481697833041", "text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.", "title": "" }, { "docid": "e1b536458ddc8603b281bac69e6bd2e8", "text": "We present highly integrated sensor-actuator-controller units (SAC units), addressing the increasing need for easy to use components in the design of modern high-performance robotic systems. Following strict design principles and an electro-mechanical co-design from the beginning on, our development resulted in highly integrated SAC units. Each SAC unit includes a motor, a gear unit, an IMU, sensors for torque, position and temperature as well as all necessary embedded electronics for control and communication over a high-speed EtherCAT bus. Key design considerations were easy to use interfaces and a robust cabling system. Using slip rings to electrically connect the input and output side, the units allow continuous rotation even when chained along a robotic arm. The experimental validation shows the potential of the new SAC units regarding the design of humanoid robots.", "title": "" }, { "docid": "1ff8d3270f4884ca9a9c3d875bdf1227", "text": "This paper addresses the challenging problem of perceiving the hidden or occluded geometry of the scene depicted in any given RGBD image. Unlike other image labeling problems such as image segmentation where each pixel needs to be assigned a single label, layered decomposition requires us to assign multiple labels to pixels. 
We propose a novel \"Occlusion-CRF\" model that allows for the integration of sophisticated priors to regularize the solution space and enables the automatic inference of the layer decomposition. We use a generalization of the Fusion Move algorithm to perform Maximum a Posteriori (MAP) inference on the model that can handle the large label sets needed to represent multiple surface assignments to each pixel. We have evaluated the proposed model and the inference algorithm on many RGBD images of cluttered indoor scenes. Our experiments show that not only is our model able to explain occlusions but it also enables automatic inpainting of occluded/invisible surfaces.", "title": "" }, { "docid": "5f89dba01f03d4e7fbb2baa8877e0dff", "text": "The basic aim of a biometric identification system is to discriminate automatically between subjects in a reliable and dependable way, according to a specific-target application. Multimodal biometric identification systems aim to fuse two or more physical or behavioral traits to provide optimal False Acceptance Rate (FAR) and False Rejection Rate (FRR), thus improving system accuracy and dependability. In this paper, an innovative multimodal biometric identification system based on iris and fingerprint traits is proposed. The paper is a state-of-the-art advancement of multibiometrics, offering an innovative perspective on feature fusion. In greater detail, a frequency-based approach results in a homogeneous biometric vector, integrating iris and fingerprint data. Successively, a Hamming-distance-based matching algorithm deals with the unified homogeneous biometric vector. The proposed multimodal system achieves interesting results with several commonly used databases. For example, we have obtained an interesting working point with FAR = 0% and FRR = 5.71% using the entire fingerprint verification competition (FVC) 2002 DB2B database and a randomly extracted same-size subset of the BATH database. At the same time, considering the BATH database and the FVC2002 DB2A database, we have obtained a further interesting working point with FAR = 0% and FRR = 7.28% ÷ 9.7%.", "title": "" }, { "docid": "78a9a124a2a0da962088db14bd417efa", "text": "The quickening pace of MOSFET technology scaling, as seen in the new 2001 International Technology Roadmap for Semiconductors [1], is accelerating the introduction of many new technologies to extend CMOS into nanoscale MOSFET structures heretofore not thought possible. A cautious optimism is emerging that these new technologies may extend MOSFETs to the 22-nm node (9-nm physical gate length) by 2016 if not by the end of this decade. These new devices likely will feature several new materials cleverly incorporated into new nonbulk MOSFET structures. They will be ultra fast and dense with a voracious appetite for power. Intrinsic device speeds may be more than 1 THz and integration densities will exceed 1 billion transistors per cm. Excessive power consumption, however, will demand judicious use of these high-performance devices only in those critical paths requiring their superior performance. Two or perhaps three other lower performance, more power-efficient MOSFETs will likely be used to perform less performance-critical functions on the chip to manage the total power consumption. Beyond CMOS, several completely new approaches to information-processing and data-storage technologies and architectures are emerging to address the timeframe beyond the current roadmap. 
Rather than vying to “replace” CMOS, one or more of these embryonic paradigms, when combined with a CMOS platform, could extend microelectronics to new applications domains currently not accessible to CMOS. A successful new information-processing paradigm most likely will require a new platform technology embodying a fabric of interconnected primitive logic cells, perhaps in three dimensions. Further, this new logic paradigm may suggest a new symbiotic information-processing architecture to fully extract the potential offered by the logic fabric. An excellent summary of nanoelectronic devices is contained in the Technology Roadmap for Nanoelectronics, produced by the European Commission’s Information Society Technology Programme (Future and Emerging Technologies)[2]. The goal of this article is to introduce and review many new device technologies and concepts for information and signal processing having potential to extend microelectronics to and beyond the time frame of the new 2001 ITRS. The scope of this article is to “cast a broad net” to gather in one place substantive, alternative concepts for memory, logic, and information-processing architectures that would, if successful, substantially extend the time frame of the ITRS beyond CMOS. As such, this section will provide a window into candidate approaches. Provision of in-depth, critical analysis of each approach will be quite important but is beyond the scope of this article.", "title": "" }, { "docid": "0743a084a2fbacab046b3e0420d74443", "text": "Quick Response (QR) codes are two dimensional barcodes that can be used to efficiently store small amount of data. They are increasingly used in all life fields, especially with the wide spread of smart phones which are used as QR code scanners. While QR codes have many advantages that make them very popular, there are several security issues and risks that are associated with them. Running malicious code, stealing users' sensitive information and violating their privacy and identity theft are some typical security risks that a user might be subject to in the background while he/she is just reading the QR code in the foreground. In this paper, a security system for QR codes that guarantees both users and generators security concerns is implemented. The system is backward compatible with current standard used for encoding QR codes. The system is implemented and tested using an Android-based smartphone application. It was found that the system introduces a little overhead in terms of the delay required for integrity verification and content validation.", "title": "" }, { "docid": "a3185ee0a3c4ad9a15b52233f46b5e1a", "text": "Automatic fusion of aerial optical imagery and untextured LiDAR data has been of significant interest for generating photo-realistic 3D urban models in recent years. However, unsupervised, robust registration still remains a challenge. This paper presents a new registration method that does not require priori knowledge such as GPS/INS information. The proposed algorithm is based on feature correspondence between a LiDAR depth map and a depth map from an optical image. Each optical depth map is generated from edge-preserving dense correspondence between the image and another optical image, followed by ground plane estimation and alignment for depth consistency. Our two-pass RANSAC with Maximum Likelihood estimation incorporates 2D-2D and 2D-3D correspondences to yield robust camera pose estimation. 
Experiments with a LiDAR-optical imagery dataset show promising results, without using initial pose information.", "title": "" }, { "docid": "7ed58e8ec5858bdcb5440123aea57bb1", "text": "The demand for cloud computing is increasing because of the popularity of digital devices and the wide use of the Internet. Among cloud computing services, most consumers use cloud storage services that provide mass storage. This is because these services give them various additional functions as well as storage. It is easy to access cloud storage services using smartphones. With increasing utilization, it is possible for malicious users to abuse cloud storage services. Therefore, a study on digital forensic investigation of cloud storage services is necessary. This paper proposes new procedure for investigating and analyzing the artifacts of all accessible devices, such as Windows, Mac, iPhone, and Android smartphone.", "title": "" }, { "docid": "08aa54980d7664ea6fc57aad1dd0029e", "text": "Visual surveillance of dynamic objects, particularly vehicles on the road, has been, over the past decade, an active research topic in computer vision and intelligent transportation systems communities. In the context of traffic monitoring, important advances have been achieved in environment modeling, vehicle detection, tracking, and behavior analysis. This paper is a survey that addresses particularly the issues related to vehicle monitoring with cameras at road intersections. In fact, the latter has variable architectures and represents a critical area in traffic. Accidents at intersections are extremely dangerous, and most of them are caused by drivers' errors. Several projects have been carried out to enhance the safety of drivers in the special context of intersections. In this paper, we provide an overview of vehicle perception systems at road intersections and representative related data sets. The reader is then given an introductory overview of general vision-based vehicle monitoring approaches. Subsequently and above all, we present a review of studies related to vehicle detection and tracking in intersection-like scenarios. Regarding intersection monitoring, we distinguish and compare roadside (pole-mounted, stationary) and in-vehicle (mobile platforms) systems. Then, we focus on camera-based roadside monitoring systems, with special attention to omnidirectional setups. Finally, we present possible research directions that are likely to improve the performance of vehicle detection and tracking at intersections.", "title": "" }, { "docid": "f7bb972cc08d290661bd1f53c4f505f4", "text": "BACKGROUND\nOpen-source clinical natural-language-processing (NLP) systems have lowered the barrier to the development of effective clinical document classification systems. Clinical natural-language-processing systems annotate the syntax and semantics of clinical text; however, feature extraction and representation for document classification pose technical challenges.\n\n\nMETHODS\nThe authors developed extensions to the clinical Text Analysis and Knowledge Extraction System (cTAKES) that simplify feature extraction, experimentation with various feature representations, and the development of both rule and machine-learning based document classifiers. 
The authors describe and evaluate their system, the Yale cTAKES Extensions (YTEX), on the classification of radiology reports that contain findings suggestive of hepatic decompensation.\n\n\nRESULTS AND DISCUSSION\nThe F(1)-Score of the system for the retrieval of abdominal radiology reports was 96%, and was 79%, 91%, and 95% for the presence of liver masses, ascites, and varices, respectively. The authors released YTEX as open source, available at http://code.google.com/p/ytex.", "title": "" }, { "docid": "e612999b851a75249eeb83a8ab19b78d", "text": "Endophytes are the microorganisms that exist inside the plant tissues without having any negative impact on the host plant. Medicinal plants constitute the huge diversity of endophytic actinobacteria of economical importance. These microbes have huge potential to synthesis of numerous novel compounds that can be exploited in pharmaceutical, agricultural and other industries. It is of prime importance to focus the present research on practical utilization of this microbial group in order to find out the solutions to the problems related to health, environment and agriculture. An extensive characterization of diverse population of endophytic actinobacteria associated with medicinal plants can provide a greater insight into the plant-endophyte interactions and evolution of mutualism. In the present review, we have discussed the diversity of endophytic actinobacteria of from medicinal plants their multiple bioactivities.", "title": "" }, { "docid": "72607f5a6371e1d3e390c93bd0dff25b", "text": "In this paper we present ASPOGAMO, a vision system capable of estimating motion trajectories of soccer players taped on video. The system performs well in a multitude of application scenarios because of its adaptivity to various camera setups, such as single or multiple camera settings, static or dynamic ones. Furthermore, ASPOGAMO can directly process image streams taken from TV broadcast, and extract all valuable information despite scene interruptions and cuts between different cameras. The system achieves a high level of robustness through the use of modelbased vision algorithms for camera estimation and player recognition and a probabilistic multi-player tracking framework capable of dealing with occlusion situations typical in team-sports. The continuous interplay between these submodules is adding to both the reliability and the efficiency of the overall system.", "title": "" }, { "docid": "f8093849e9157475149d00782c60ae60", "text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. 
This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.", "title": "" }, { "docid": "940e7dc630b7dcbe097ade7abb2883a4", "text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.", "title": "" }, { "docid": "3f7207df2fe2ee320dd268311051d511", "text": "In this article, we study the impact of such eye-hand visibility mismatch on selection tasks performed with hand-rooted pointing techniques. We propose a new mapping for ray control, called Ray Casting from the Eye (RCE), which attempts to overcome this mismatch's negative effects. In essence, RCE combines the benefits of image-plane techniques (the absence of visibility mismatch and continuity of the ray movement in screen space) with the benefits of ray control through hand rotation (requiring less physical hand movement). This article builds on a previous study on the impact of eye-to-hand separation on 3D pointing selection. Here, we provide empirical evidence that RCE clearly outperforms classic ray casting (RC) selection, both in sparse and cluttered scenes.", "title": "" }, { "docid": "df69a701bca12d3163857a9932ef51e2", "text": "Students often have their own individual laptop computers in university classes, and researchers debate the potential benefits and drawbacks of laptop use. In the presented research, we used a combination of surveys and in-class observations to study how students use their laptops in an unmonitored and unrestricted class setting—a large lecture-based university class with nearly 3000 enrolled students. By analyzing computer use over the duration of long (165 minute) classes, we demonstrate how computer use changes over time. The observations and studentreports provided similar descriptions of laptop activities. Note taking was the most common use for the computers, followed by the use of social media web sites. Overall, the data show that students engaged in off-task computer activities for nearly two-thirds of the time. 
An analysis of the frequency of the various laptop activities over time showed that engagement in individual activities varied significantly over the duration of the class.", "title": "" }, { "docid": "7d7b79b7651ee1f4165ff9e49fa473d0", "text": "• Skip evaluating the gradient at exponentially increasing intervals if it remains at zero.
Algorithm 2: Heuristic for skipping evaluations of f_i at x
if s_i = 0 then
  compute f'_i(x).
  if f'_i(x) = 0 then
    p_i = p_i + 1. {Update the number of consecutive times f'_i(x) was zero.}
    s_i = 2^max{0, p_i−2}. {Skip an exponential number of future evaluations if it remains zero.}
  else
    p_i = 0. {This could be a support vector, do not skip it next time.}
  end if
  return f'_i(x).
else
  s_i = s_i − 1. {In this case, we skip the evaluation.}
  return 0.
end if", "title": "" }, { "docid": "c9f2fd6bdcca5e55c5c895f65768e533", "text": "We implemented live-textured geometry model creation with immediate coverage feedback visualizations in AR on the Microsoft HoloLens. A user walking and looking around a physical space can create a textured model of the space, ready for remote exploration and AR collaboration. Out of the box, a HoloLens builds a triangle mesh of the environment while scanning and being tracked in a new environment. The mesh contains vertices, triangles, and normals, but not color. We take the video stream from the color camera and use it to color a UV texture to be mapped to the mesh. Due to the limited graphics memory of the HoloLens, we use a fixed-size texture. Since the mesh generation dynamically changes in real time, we use an adaptive mapping scheme that evenly distributes every triangle of the dynamic mesh onto the fixed-size texture and adapts to new geometry without compromising existing color data. Occlusion is also considered. The user can walk around their environment and continuously fill in the texture while growing the mesh in real-time. We describe our texture generation algorithm and illustrate benefits and limitations of our system with example modeling sessions. Having first-person immediate AR feedback on the quality of modeled physical infrastructure, both in terms of mesh resolution and texture quality, helps the creation of high-quality colored meshes with this standalone wireless device and a fixed memory footprint in real-time.", "title": "" }, { "docid": "124cc672103959685cdcb3e98ae33d93", "text": "With the rise of social media and advancements in AI technology, human-bot interaction will soon be commonplace. In this paper we explore human-bot interaction in STACK OVERFLOW, a question and answer website for developers. For this purpose, we built a bot emulating an ordinary user answering questions concerning the resolution of git error messages. In a first run this bot impersonated a human, while in a second run the same bot revealed its machine identity. Despite being functionally identical, the two bot variants elicited quite different reactions.", "title": "" } ]
scidocsrr
e0760f6dc1569a697bd120cadc8531b0
Implementation and Experimental Results of Superposition Coding on Software Radio
[ { "docid": "0d8b2997f10319da3d59ec35731c8e85", "text": "In this paper, we study the performance of the IEEE 802.11 MAC protocol under a range of jammers that covers both channel-oblivious and channel-aware jamming. We study two channel-oblivious jammers: a periodic jammer that jams deterministically at a specified rate, and a memoryless jammer whose signals arrive according to a Poisson process. We also develop new models for channel-aware jamming, including a reactive jammer that only jams non-colliding transmissions and an omniscient jammer that optimally adjusts its strategy according to current states of the participating nodes. Our study comprises of a theoretical analysis of the saturation throughput of 802.11 under jamming, an extensive simulation study, and a testbed to conduct real world experimentation of jamming IEEE 802.11 using GNU Radio and USRP platform. In our theoretical analysis, we use a discrete-time Markov chain analysis to derive formulae for the saturation throughput of IEEE 802.11 under memoryless, reactive and omniscient jamming. One of our key results is a characterization of optimal omniscient jamming that establishes a lower bound on the saturation throughput of 802.11 under arbitrary jammer attacks. We validate the theoretical analysis by means of Qualnet simulations. Finally, we measure the real-world performance of periodic and memoryless jammers using our GNU radio jammer prototype.", "title": "" } ]
[ { "docid": "69d06dddcc9aa263639d4c7f066c461d", "text": "Mixed- and same-sex dyads were observed to examine effects of gender composition on language and of language on gender differences in influence. Ss discussed a topic on which they disagreed. Women were more tentative than men, but only in mixed-sex dyads. Women who spoke tentatively were more influential with men and less influential with women. Language had no effect on how influential men were. In a second study, 120 Ss listened to an audiotape of identical persuasive messages presented either by a man or a woman, half of whom spoke tentatively. Female speakers who spoke tentatively were more influential with male Ss and less influential with female Ss than those who spoke assertively. Male speakers were equally influential in each condition.", "title": "" }, { "docid": "62ca2853492b017a052b9bf5e9b955ff", "text": "This paper describes our attempt to build a sentiment analysis system for Indonesian tweets. With this system, we can study and identify sentiments and opinions in a text or document computationally. We used four thousand manually labeled tweets collected in February and March 2016 to build the model. Because of the variety of content in tweets, we analyze tweets into eight groups in total, including pos(itive), neg(ative), and neu(tral). Finally, we obtained 73.2% accuracy with Long Short Term Memory (LSTM) without normalizer.", "title": "" }, { "docid": "c64dd1051c5b6892df08813e38285843", "text": "Diabetes has emerged as a major healthcare problem in India. Today, approximately 8.3% of the global adult population is suffering from diabetes. India is one of the most diabetic-populated countries in the world. Today the technologies available in the market are invasive methods. Since invasive methods cause pain, are time-consuming and expensive, and carry a potential risk of spreading infectious diseases like Hepatitis and HIV, continuous monitoring is therefore not possible. Nowadays there is a tremendous increase in the use of electrical and electronic equipment in the medical field for clinical and research purposes. Thus biomedical equipment has a greater role in solving medical problems and enhancing quality of life. Hence there is a great demand for a reliable, instantaneous, cost-effective and comfortable measurement system for the detection of blood glucose concentration. A non-invasive blood glucose measurement device is one such system, which can be used for continuous monitoring of glucose levels in the human body.", "title": "" }, { "docid": "4cf6a69833d7e553f0818aa72c99c938", "text": "Work on the semantics of questions has argued that the relation between a question and its answer(s) can be cast in terms of logical entailment. In this paper, we demonstrate how computational systems designed to recognize textual entailment can be used to enhance the accuracy of current open-domain automatic question answering (Q/A) systems. In our experiments, we show that when textual entailment information is used to either filter or rank answers returned by a Q/A system, accuracy can be increased by as much as 20% overall.", "title": "" }, { "docid": "a39c9399742571ca389813ffb7e7657e", "text": "Developed agriculture needs to find new ways to improve efficiency. One approach is to utilise available information technologies in the form of more intelligent machines to reduce and target energy inputs in more effective ways than in the past. 
Precision Farming has shown the benefits of this approach, but we can now move towards a new generation of equipment. The advent of autonomous system architectures gives us the opportunity to develop a completely new range of agricultural equipment based on small smart machines that can do the right thing, in the right place, at the right time, in the right way.", "title": "" }, { "docid": "274c630f1ff8af8ac22b3ebb67e266ea", "text": "There has been a long debate about the predominant involvement of the different adenosine receptor subtypes and the preferential role of pre- versus post-synaptic mechanisms in the psychostimulant effects of the adenosine receptor antagonist caffeine. Both striatal A(1) and A(2A) receptors are involved in the motor-activating and probably reinforcing effects of caffeine, although they play a different role under conditions of acute or chronic caffeine administration. The present review emphasizes the key integrative role of adenosine and adenosine receptor heteromers in the computation of information at the level of the striatal spine module (SSM). This local module is mostly represented by the dendritic spine of the medium spiny neuron with its glutamatergic and dopaminergic synapses and astroglial processes that wrap the glutamatergic synapse. In the SSM, adenosine acts both pre- and post-synaptically through multiple mechanisms, which depend on heteromerization of A(1) and A(2A) receptors among themselves and with D(1) and D(2) receptors, respectively. A critical aspect of the mechanisms of the psychostimulant effects of caffeine is its ability to release the pre- and post-synaptic brakes that adenosine imposes on dopaminergic neurotransmission by acting on different adenosine receptor heteromers localized in different elements of the SSM.", "title": "" }, { "docid": "7df7377675ac0dfda5bcd22f2f5ba22b", "text": "Background and Aim. Esthetic concerns in primary teeth have been studied mainly from the point of view of parents. The aim of this study was to determine whether children aged 5-8 years are able to have an opinion regarding the changes in appearance of their teeth due to dental caries and the materials used to restore those teeth. Methodology. A total of 107 children and both of their parents (n = 321), who were seeking dental treatment, were included in this study. A tool comprising a questionnaire and pictures of carious lesions and their treatment arranged in the form of a presentation was validated and tested on 20 children and their parents. The validated tool was then tested on all participants. Results. Children had acceptable validity statistics for the tool, suggesting that they were able to make informed decisions regarding esthetic restorations. There was no difference between the responses of the children and their parents on most points. Zirconia crowns appeared to be the most acceptable full coverage restoration for primary anterior teeth among both children and their parents. Conclusion. Within the limitations of the study it can be concluded that children in their sixth year of life are capable of appreciating the esthetics of the restorations for their anterior teeth.", "title": "" }, { "docid": "3afa057464635a4d78d46461562390ea", "text": "Digital librarians strive to add value to the collections they create and maintain. One way is through selectivity: a carefully chosen set of authoritative documents in a particular topic area is far more useful to those working in the area than a huge, unfocused collection (like the Web). 
Another is by augmenting the collection with high-quality metadata, which supports activities of searching and browsing in a uniform and useful way. A third way, and our topic here, is to enrich the documents by examining their content, extracting information, and using it to enhance the ways they can be located and presented. Text mining is a burgeoning new field that attempts to glean meaningful information from natural-language text. It may be loosely characterized as the process of analyzing text to extract information that is useful for particular purposes. It most commonly targets text whose function is the communication of factual information or opinions, and the motivation for trying to extract information from such text automatically is compelling – even if success is only partial. “Text mining” (sometimes called “text data mining”; [4]) defies tight definition but encompasses a wide range of activities: text summarization; document retrieval; document clustering; text categorization; language identification; authorship ascription; identifying phrases, phrase structures, and key phrases; extracting “entities” such as names, dates, and abbreviations; locating acronyms and their definitions; filling predefined templates with extracted information; and even learning rules from such templates [8]. Techniques of text mining have much to offer digital libraries and their users. Here we describe the marriage of a widely used digital library system (Greenstone) with a development environment for text mining (GATE) to enrich the library reader’s experience. The work is in progress: one level of integration has been demonstrated and another is planned. The project has been greatly facilitated by the fact that both systems are publicly available under the GNU public license – and, in addition, this means that the benefits gained by leveraging text mining techniques will accrue to all Greenstone users.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "4d2f03a786f8addf0825b5bc7701c621", "text": "Integrated Design of Agile Missile Guidance and Autopilot Systems By P. K. Menon and E. J. Ohlmeyer Abstract Recent threat assessments by the US Navy have indicated the need for improving the accuracy of defensive missiles. This objective can only be achieved by enhancing the performance of the missile subsystems and by finding methods to exploit the synergism existing between subsystems. As a first step towards the development of integrated design methodologies, this paper develops a technique for integrated design of missile guidance and autopilot systems. The traditional approach for the design of guidance and autopilot systems has been to design these subsystems separately and then to integrate them together before verifying their performance. Such an approach does not exploit any beneficial relationships between these and other subsystems. The application of the feedback linearization technique for integrated guidance-autopilot system design is discussed. 
Numerical results using a six degree-of-freedom missile simulation are given. Integrated guidance-autopilot systems are expected to result in significant improvements in missile performance, leading to lower weight and enhanced lethality. Both of these factors will lead to a more effective, lower-cost weapon system. Integrated system design methods developed under the present research effort also have extensive applications in high performance aircraft autopilot and guidance systems.", "title": "" }, { "docid": "b6e6963d4e7122dd2d852b2300e50687", "text": "User analysis is a crucial aspect of user-centered systems design, yet Human-Computer Interaction (HCI) has yet to formulate reliable and valid characterizations of users beyond gross distinctions based on task and experience. Individual differences research from mainstream psychology has identified a stable set of characteristics that would appear to offer potential application in the HCI arena. Furthermore, in its evolution over the last 100 years, research on individual differences has faced many of the problems of theoretical status and applicability that is common to HCI. In the present paper the relationship between work in cognitive and differential psychology and current analyses of users in HCI is examined. It is concluded that HCI could gain significant predictive power if individual differences research was related to the analysis of users in contemporary systems design.", "title": "" }, { "docid": "45484e263769ada08d6af03e32f079fe", "text": "In this paper, a triple-band monopole antenna for WLAN and WiMAX wireless communication applications is presented. The antenna has a simple structure designed for 2.4/5.2/5.8 GHz WLAN and 3.5/5.5 GHz WiMAX bands. The radiator is composed of just two branches and a short stub. The antenna is designed on a 40 × 40 × 0.8 mm3 substrate using computer simulation. For verification of simulation results, a prototype is fabricated and measured. Results show that the antenna can provide three impedance bandwidths, 2.35-2.58 GHz, 3.25-4 GHz and 4.95-5.9 GHz, for the WLAN and WiMAX applications. The simulated and measured radiation patterns, efficiencies and gains of the antenna are all presented.", "title": "" }, { "docid": "0ac9ad839f21bd03342dd786b09155fe", "text": "Graphs are fundamental data structures which concisely capture the relational structure in many important real-world domains, such as knowledge graphs, physical and social interactions, language, and chemistry. Here we introduce a powerful new approach for learning generative models over graphs, which can capture both their structure and attributes. Our approach uses graph neural networks to express probabilistic dependencies among a graph’s nodes and edges, and can, in principle, learn distributions over any arbitrary graph. In a series of experiments our results show that once trained, our models can generate good quality samples of both synthetic graphs as well as real molecular graphs, both unconditionally and conditioned on data. Compared to baselines that do not use graph-structured representations, our models often perform far better. We also explore key challenges of learning generative models of graphs, such as how to handle symmetries and ordering of elements during the graph generation process, and offer possible solutions. 
Our work is the first and most general approach for learning generative models over arbitrary graphs, and opens new directions for moving away from restrictions of vector- and sequence-like knowledge representations, toward more expressive and flexible relational data structures.", "title": "" }, { "docid": "8be48759b1ae6b7d65ff61ebc43dfee6", "text": "In this study, we introduce a household object dataset for recognition and manipulation tasks, focusing on commonly available objects in order to facilitate sharing of applications and algorithms. The core information available for each object consists of a 3D surface model annotated with a large set of possible grasp points, pre-computed using a grasp simulator. The dataset is an integral part of a complete Robot Operating System (ROS) architecture for performing pick and place tasks. We present our current applications using this data, and discuss possible extensions and future directions for shared datasets for robot operation in unstructured settings. I. DATASETS FOR ROBOTICS RESEARCH Recent years have seen a growing consensus that one of the keys to robotic applications in unstructured environments lies in collaboration and reusable functionality. An immediate result has been the emergence of a number of platforms and frameworks for sharing operational “building blocks,” usually in the form of code modules, with functionality ranging from low-level hardware drivers to complex algorithms such as path or motion planners. By using a set of now well-established guidelines, such as stable documented interfaces and standardized communication protocols, this type of collaboration has accelerated development towards complex applications. However, a similar set of methods for sharing and reusing data has been slower to emerge. In this paper we describe our effort in producing and releasing to the community a complete architecture for performing pick-and-place tasks in unstructured (or semistructured) environments. There are two key components to this architecture: the algorithms themselves, developed using the Robot Operating System (ROS) framework, and the knowledge base that they operate on. In our case, the algorithms provide abilities such as object segmentation and recognition, motion planning with collision avoidance, grasp execution using tactile feedback, etc. The knowledge base, which is the main focus of this study, contains relevant information for object recognition and grasping for a large set of common household objects. Some of the key aspects of combining computational tools with the data that they operate on are: • other researchers will have the option of directly using our dataset over the Internet (in an open, read-only fashion), or downloading and customizing it for their own applications; • defining a stable interface to the dataset component of the release will allow other researchers to provide their own modified and/or extended versions of the data to the community, knowing that it will be directly usable by anyone running the algorithmic component; • the data and algorithm components can evolve together, like any other components of a large software distribution, with well-defined and documented interfaces, version numbering and control, etc. In particular, our current dataset is available in the form of a relational database, using the SQL standard. 
This choice provides additional benefits, including optimized relational queries, both for using the data on-line and managing it off-line, and low-level serialization functionality for most major languages. We believe that these features can help foster collaboration as well as provide useful tools for benchmarking as we advance towards increasingly complex behavior in unstructured environments. There have been previous examples of datasets released in the research community (as described for example in [3], [7], [13] to name only a few), used either for benchmarking or for data-driven algorithms. However, few of these have been accompanied by the relevant algorithms, or have offered a well-defined interface to be used for extensions. The database component of our architecture was directly inspired by the Columbia Grasp Database (CGDB) [5], [6], released together with processing software integrated with the GraspIt! simulator [9]. The CGDB contains object shape and grasp information for a very large (n = 7,256) set of general shapes from the Princeton Shape Benchmark [12]. The dataset presented here is smaller in scope (n = 180), referring only to actual graspable objects from the real world, and is integrated with a complete manipulation pipeline on the PR2 robot. II. THE OBJECT AND GRASP DATABASE", "title": "" }, { "docid": "70f35b19ba583de3b9942d88c94b9148", "text": "ARCHEOGUIDE (Augmented Reality-based Cultural Heritage On-site GUIDE) is an IST project, funded by the EU, aiming at providing a personalized Virtual Reality guide and tour assistant to archaeological site visitors and a multimedia repository and information system for archaeologists and site curators. The system provides monument reconstructions, ancient life simulation, and database tools for creating and archiving archaeological multimedia material.", "title": "" }, { "docid": "8087fe4979dc5c056decd31e7c1e6ee1", "text": "With over 100 million users, Duolingo is the most popular education app in the world on Android and iOS. In the first part of this talk, we will describe the motivation for creating Duolingo, its philosophy, and some of the basic techniques used to successfully teach languages and keep users engaged. The second part will focus on the machine learning and natural language processing algorithms we use to model student learning.", "title": "" }, { "docid": "339f7a0031680a2d930f143700d66d5e", "text": "We propose an approach to generate natural language questions from knowledge graphs such as DBpedia and YAGO. We stage this in the setting of a quiz game. Our approach, though, is general enough to be applicable in other settings. Given a topic of interest (e.g., Soccer) and a difficulty (e.g., hard), our approach selects a query answer, generates a SPARQL query having the answer as its sole result, before verbalizing the question.", "title": "" }, { "docid": "1de2d4e5b74461c142e054ffd2e62c2d", "text": "Table: Comparisons of CNN, LSTM and SWEM architectures. 
Columns correspond to the number of compositional parameters, computational complexity and sequential operations, respectively. • Consider a text sequence represented as X, composed of a sequence of words. Let {v_1, v_2, ..., v_L} denote the respective word embeddings for each token, where L is the sentence/document length; • The compositional function, X → z, aims to combine word embeddings into a fixed-length sentence/document representation z. Typically, LSTM or CNN are employed for this purpose;", "title": "" }, { "docid": "994a674367471efecd38ac22b3c209fc", "text": "In vehicular ad hoc networks (VANETs), trust establishment among vehicles is important to secure integrity and reliability of applications. In general, trust and reliability help vehicles to collect correct and credible information from surrounding vehicles. On top of that, a secure trust model can deal with uncertainties and risk taking from unreliable information in vehicular environments. However, inaccurate, incomplete, and imprecise information collected by vehicles as well as movable/immovable obstacles have interrupting effects on VANET. In this paper, a fuzzy trust model based on experience and plausibility is proposed to secure the vehicular network. The proposed trust model executes a series of security checks to ensure the correctness of the information received from authorized vehicles. Moreover, fog nodes are adopted as a facility to evaluate the level of accuracy of event’s location. The analyses show that the proposed solution not only detects malicious attackers and faulty nodes, but also overcomes the uncertainty and imprecision of data in vehicular networks in both line of sight and non-line of sight environments.", "title": "" }, { "docid": "e103d3a7be2ce1933eac191d2324e85b", "text": "With recent progress in medical signal processing, the EEG allows the study of brain functioning with a high temporal and spatial resolution. This approach is possible by combining the standard processing algorithms of cortical brain waves with characterization and interpolation methods. First, a new vector of characteristics for each EEG channel was introduced using the Extended Kalman filter (EKF). Next, the spherical spline interpolation technique was applied in order to rebuild other vectors corresponding to virtual electrodes. The temporal variation of these vectors was restored by applying the EKF. Finally, the accuracy of the method has been estimated by calculating the error between the actual and interpolated signal after passing through the characterization method with the Root Mean Square Error (RMSE) algorithm.", "title": "" } ]
scidocsrr
81d2122abcbd7aed4d788fe2d4778b7c
The Internet of Things: Vision & challenges
[ { "docid": "fa3c52e9b3c4a361fd869977ba61c7bf", "text": "The combination of the Internet and emerging technologies such as nearfield communications, real-time localization, and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of Things and enable novel computing applications. As a step toward design and architectural principles for smart objects, the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity. In particular, they describe activity-, policy-, and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex application.", "title": "" } ]
[ { "docid": "4a741431c708cd92a250bcb91e4f1638", "text": "PURPOSE\nIn today's workplace, nurses are highly skilled professionals possessing expertise in both information technology and nursing. Nursing informatics competencies are recognized as an important capability of nurses. No established guidelines existed for nurses in Asia. This study focused on identifying the nursing informatics competencies required of nurses in Taiwan.\n\n\nMETHODS\nA modified Web-based Delphi method was used for two expert groups in nursing, educators and administrators. Experts responded to 323 items on the Nursing Informatics Competencies Questionnaire, modified from the initial work of Staggers, Gassert and Curran to include 45 additional items. Three Web-based Delphi rounds were conducted. Analysis included detailed item analysis. Competencies that met 60% or greater agreement of item importance and appropriate level of nursing practice were included.\n\n\nRESULTS\nN=32 experts agreed to participate in Round 1, 23 nursing educators and 9 administrators. The participation rates for Rounds 2 and 3=68.8%. By Round 3, 318 of 323 nursing informatics competencies achieved required consensus levels. Of the new competencies, 42 of 45 were validated. A high degree of agreement existed for specific nursing informatics competencies required for nurses in Taiwan (97.8%).\n\n\nCONCLUSIONS\nThis study provides a current master list of nursing informatics competency requirements for nurses at four levels in the U.S. and Taiwan. The results are very similar to the original work of Staggers et al. The results have international relevance because of the global importance of information technology for the nursing profession.", "title": "" }, { "docid": "ff6c60d341ba05daa38a2f173eb03b19", "text": "Despite the importance of online product recommendations (OPR) in e-Commerce transactions, there is still very little understanding about how different recommendation sources affect consumers' beliefs and behavior, and whether these effects are additive, complementary or rivals for different types of products. This study investigates the differential effects of provider recommendations (PR) and consumer reviews (CR) on the instrumental, affective and trusting dimensions of consumer beliefs, and show how these beliefs ultimately influence continued OPR usage and product purchase intentions. This study tests a conceptual model linking PR and CR to four consumer beliefs (perceived usefulness, perceived ease of use, perceived affective quality, and trust) in two different product settings (search products vs. experience products). Results of an experimental study (N = 396) show that users of PR express significantly higher perceived usefulness and perceived ease of use than users of CR, while users of CR express higher trusting beliefs and perceived affective quality than users of PR, resulting in different effect mechanisms towards OPR reuse and purchase intentions in e-Commerce transactions. Further, CR were found to elicit higher perceived usefulness, trusting beliefs and perceived affective quality for experience goods, while PR were found to unfold higher effects on all of these variables for search goods.", "title": "" }, { "docid": "a9927b6e914d3f97318969e24fc151a2", "text": "This paper presents a cluster-based TDMA (CBT) system for inter-vehicle communications. 
In intra-cluster communications, the proposed CBT uses a simple transmit-and-listen scheme to quickly elect a VC (VANET Coordinator), and it allows a VN (VANET node) to randomly choose a time slot for bandwidth requests (BR) without limiting the number of VNs. In inter-cluster communications, when two clusters are approaching, the CBT can quickly resolve the collisions by re-allocating time slots in one of the clusters. To analyze the performance of the proposed CBT, we derive mathematical equations using probability. The performance metrics of our interest include the average number of time slots for electing a VC, the average number of time slots required for BR, and the total number of time slots required before data can be successfully transmitted. The analytical results are finally validated by a simulation. Both the analytical and simulation results show that the proposed CBT spends less time to form a small-sized cluster than IEEE 802.11p. Additionally, when the number of joining VNs is increased, CBT takes less waiting time before a VN can effectively transmit data.", "title": "" }, { "docid": "a3e36252f25a9fe6f46c729fb8a2f157", "text": "Although significant advances have been made in the area of human pose estimation from images using a deep Convolutional Neural Network (ConvNet), it remains a big challenge to perform 3D pose inference in-the-wild. This is due to the difficulty of obtaining 3D pose ground truth for outdoor environments. In this paper, we propose a novel framework to tackle this problem by exploiting the information of each bone indicating if it is forward or backward with respect to the view of the camera (we term it Forward-or-Backward Information, abbreviated as FBI). Our method firstly trains a ConvNet with two branches which maps an image of a human to both the 2D joint locations and the FBI of bones. This information is further fed into a deep regression network to predict the 3D positions of joints. To support the training, we also develop an annotation user interface and labeled such FBI for around 12K in-the-wild images which are randomly selected from MPII (a public dataset of 2D pose annotation). Our experimental results on the standard benchmarks demonstrate that our approach outperforms state-of-the-art methods both qualitatively and quantitatively.", "title": "" }, { "docid": "493ad96590ee91fdfd68a4e59492dc55", "text": "The 21st century will see a renewed focus on intermodal freight transportation driven by the changing requirements of global supply chains. Each of the transportation modes (air, inland water, ocean, pipeline, rail, and road) has gone through technological evolution and has functioned separately under a modally based regulatory structure for most of the 20th century. With the development of containerization in the mid-1900s, the reorientation toward deregulation near the end of the century, and a new focus on logistics and global supply chain requirements, the stage is set for continued intermodal transportation growth. 
The growth of intermodal freight transportation will be driven and challenged by four factors: (a) measuring, understanding, and responding to the role of intermodalism in the changing customer requirements and hypercompetition of supply chains in a global marketplace; (b) the need to reliably and flexibly respond to changing customer requirements with seamless and integrated coordination of freight and equipment flows through various modes; (c) knowledge of current and future intermodal operational options and alternatives, as well as the potential for improved information and communications technology and the challenges associated with their application; and (d) constraints on and coordination of infrastructure capacity, including policy and regulatory issues, as well as better management of existing infrastructure and broader considerations on future investment in new infrastructure.", "title": "" }, { "docid": "a2a0ff72b88d766ab5eb087c14d88b03", "text": "Next-generation non-volatile memory (NVM) technologies, such as phase-change memory and memristors, can enable computer systems infrastructure to continue keeping up with the voracious appetite of data-centric applications for large, cheap, and fast storage. Persistent memory has emerged as a promising approach to accessing emerging byte-addressable non-volatile memory through processor load/store instructions. Due to lack of commercially available NVM, system software researchers have mainly relied on emulation to model persistent memory performance. However, existing emulation approaches are either too simplistic, or too slow to emulate large-scale workloads, or require special hardware. To fill this gap and encourage wider adoption of persistent memory, we developed a performance emulator for persistent memory, called Quartz. Quartz enables an efficient emulation of a wide range of NVM latencies and bandwidth characteristics for performance evaluation of emerging byte-addressable NVMs and their impact on applications performance (without modifying or instrumenting their source code) by leveraging features available in commodity hardware. Our emulator is implemented on three latest Intel Xeon-based processor architectures: Sandy Bridge, Ivy Bridge, and Haswell. To assist researchers and engineers in evaluating design decisions with emerging NVMs, we extend Quartz for emulating the application execution on future systems with two types of memory: fast, regular volatile DRAM and slower persistent memory. We evaluate the effectiveness of our approach by using a set of specially designed memory-intensive benchmarks and real applications. The accuracy of the proposed approach is validated by running these programs both on our emulation platform and a multisocket (NUMA) machine that can support a range of memory latencies. We show that Quartz can emulate a range of performance characteristics with low overhead and good accuracy (with emulation errors 0.2% - 9%).", "title": "" }, { "docid": "fa894900871faf8cd86c1dee5fad57f7", "text": "Epigastric herniation is a rather common condition with a reported prevalence up to 10 %. Only a minority is symptomatic, presumably the reason for the scarce literature on this subject. Epigastric hernias have specific characteristics for which several anatomical theories have been developed. Whether these descriptions of pathological mechanisms still hold with regard to the characteristics of epigastric hernia is the subject of this review. 
A multi-database research was performed to reveal relevant literature by free text word and subject headings ‘epigastric hernia’, ‘linea alba’, ‘midline’ and ‘abdominal wall’. Reviewed were studies on anatomical theories describing the pathological mechanism of epigastric herniation, incidence, prevalence and female-to-male ratio and possible explanatory factors. Three different theories have been described of which two have not been confirmed by other studies. The attachment of the diaphragm causing extra tension in the epigastric region is the one still standing. Around 1.6–3.6 % of all abdominal hernias and 0.5–5 % of all operated abdominal hernias is an epigastric hernia. Epigastric hernias are 2–3 times more common in men, with a higher incidence in patients from 20 to 50 years. Some cadaver studies show an epigastric hernia rate of 0.5–10 %. These specific features of the epigastric hernias (the large asymptomatic proportion, male predominance, only above umbilical level) are discussed with regard to the general theories. The epigastric hernia is a very common condition, mostly asymptomatic. Together with general factors for hernia formation, the theory of extra tension in the epigastric region by the diaphragm is the most likely theory of epigastric hernia formation.", "title": "" }, { "docid": "680be905a0f01e26e608ba7b4b79a94e", "text": "A cost-effective position measurement system based on optical mouse sensors is presented in this work. The system is intended to be used in a planar positioning stage for microscopy applications and as such, has strict resolution, accuracy, repeatability, and sensitivity requirements. Three techniques which improve the measurement system's performance in the context of these requirements are proposed; namely, an optical magnification of the image projected onto the mouse sensor, a periodic homing procedure to reset the error buildup, and a compensation of the undesired dynamics caused by filters implemented in the mouse sensor chip.", "title": "" }, { "docid": "2c8c8511e1391d300bfd4b0abd5ecea4", "text": "In 2009, we reported on a new Intelligent Tutoring Systems (ITS) technology, example-tracing tutors, that can be built without programming using the Cognitive Tutor Authoring Tools (CTAT). Creating example-tracing tutors was shown to be 4–8 times as cost-effective as estimates for ITS development from the literature. Since 2009, CTAT and its associated learning management system, the Tutorshop, have been extended and have been used for both research and real-world instruction. As evidence that example-tracing tutors are an effective and mature ITS paradigm, CTAT-built tutors have been used by approximately 44,000 students and account for 40 % of the data sets in DataShop, a large open repository for educational technology data sets. We review 18 example-tracing tutors built since 2009, which have been shown to be effective in helping students learn in real educational settings, often with large pre/post effect sizes. These tutors support a variety of pedagogical approaches, beyond step-based problem solving, including collaborative learning, educational games, and guided invention activities. 
CTAT and other ITS authoring tools illustrate that non-programmer approaches to building ITS are viable and useful and will likely play a key role in making ITS widespread.", "title": "" }, { "docid": "2c6d4bdacab9a4fc4f0ad16f271bbc13", "text": "In low light condition, the signal-to-noise ratio (SNR) is low and thus the captured images are seriously degraded by noise. Since low light images contain much noise in flat and dark regions, contrast enhancement without considering noise characteristics causes serious noise amplification. In this paper, we propose low light image enhancement based on two-step noise suppression. First, we perform noise aware contrast enhancement using noise level function (NLF). NLF is used to get a noise aware histogram which prevents noise amplification, and we use the noise aware histogram in contrast enhancement. However, the increase of intensity by contrast enhancement reduces the visibility threshold, which makes noise visible by human eyes. Second, we utilize a just noticeable difference (JND) model from luminance adaptation to suppress noise based on human visual perception. Experimental results show that the proposed method successfully enhances contrast in low light images while minimizing noise amplification.", "title": "" }, { "docid": "c9582409212e6f9b194175845216b2b6", "text": "Although the amygdala complex is a brain area critical for human behavior, knowledge of its subspecialization is primarily derived from experiments in animals. We here employed methods for large-scale data mining to perform a connectivity-derived parcellation of the human amygdala based on whole-brain coactivation patterns computed for each seed voxel. Voxels within the histologically defined human amygdala were clustered into distinct groups based on their brain-wide coactivation maps. Using this approach, connectivity-based parcellation divided the amygdala into three distinct clusters that are highly consistent with earlier microstructural distinctions. Meta-analytic connectivity modelling then revealed the derived clusters' brain-wide connectivity patterns, while meta-data profiling allowed their functional characterization. These analyses revealed that the amygdala's laterobasal nuclei group was associated with coordinating high-level sensory input, whereas its centromedial nuclei group was linked to mediating attentional, vegetative, and motor responses. The often-neglected superficial nuclei group emerged as particularly sensitive to olfactory and probably social information processing. The results of this model-free approach support the concordance of structural, connectional, and functional organization in the human amygdala and point to the importance of acknowledging the heterogeneity of this region in neuroimaging research.", "title": "" }, { "docid": "fea4f7992ec61eaad35872e3a800559c", "text": "The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual’s learning style. Mismatches often occur between the learning styles of students in a language class and the teaching style of the instructor, with unfortunate effects on the quality of the students’ learning and on their attitudes toward the class and the subject. 
This paper defines several dimensions of learning style thought to be particularly relevant to foreign and second language education, outlines ways in which certain learning styles are favored by the teaching styles of most language instructors, and suggests steps to address the educational needs of all students in foreign language classes. Students learn in many ways—by seeing and hearing; reflecting and acting; reasoning logically and intuitively; memorizing and visualizing. Teaching methods also vary. Some instructors lecture, others demonstrate or discuss; some focus on rules and others on examples; some emphasize memory and others understanding. How much a given student learns in a class is governed in part by that student's native ability and prior preparation but also by the compatibility of his or her characteristic approach to learning and the instructor's characteristic approach to teaching. The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual's learning style. Learning styles have been extensively discussed in the educational psychology literature (Claxton & Murrell 1987; Schmeck 1988) and specifically in the context of foreign and second language education.", "title": "" }, { "docid": "b39ce00b531dcbf417d0b78c8b9bf1cd", "text": "With the transition of facial expression recognition (FER) from laboratory-controlled to challenging in-the-wild conditions and the recent success of deep learning techniques in various fields, deep neural networks have increasingly been leveraged to learn discriminative representations for automatic FER. Recent deep FER systems generally focus on two important issues: overfitting caused by a lack of sufficient training data and expression-unrelated variations, such as illumination, head pose and identity bias. In this paper, we provide a comprehensive survey on deep FER, including datasets and algorithms that provide insights into these intrinsic problems. First, we introduce the available datasets that are widely used in the literature and provide accepted data selection and evaluation principles for these datasets. We then describe the standard pipeline of a deep FER system with the related background knowledge and suggestions of applicable implementations for each stage. For the state of the art in deep FER, we review existing novel deep neural networks and related training strategies that are designed for FER based on both static images and dynamic image sequences, and discuss their advantages and limitations. Competitive performances on widely used benchmarks are also summarized in this section. We then extend our survey to additional related issues and application scenarios. Finally, we review the remaining challenges and corresponding opportunities in this field as well as future directions for the design of robust deep FER systems.", "title": "" }, { "docid": "d5b20e250e28cae54a7f3c868f342fc5", "text": "Context: Reusing software by means of copy and paste is a frequent activity in software development. The duplicated code is known as a software clone and the activity is known as code cloning. Software clones may lead to bug propagation and serious maintenance problems. Objective: This study reports an extensive systematic literature review of software clones in general and software clone detection in particular. 
Method: We used the standard systematic literature review method based on a comprehensive set of 213 articles from a total of 2039 articles published in 11 leading journals and 37 premier conferences and", "title": "" }, { "docid": "79ea2c1566b3bb1e27fe715b1a1a385b", "text": "The number of research papers available is growing at a staggering rate. Researchers need tools to help them find the papers they should read among all the papers published each year. In this paper, we present and experiment with hybrid recommender algorithms that combine Collaborative Filtering and Content-based Filtering to recommend research papers to users. Our hybrid algorithms combine the strengths of each filtering approach to address their individual weaknesses. We evaluated our algorithms through offline experiments on a database of 102,000 research papers, and through an online experiment with 110 users. For both experiments we used a dataset created from the CiteSeer repository of computer science research papers. We developed separate English and Portuguese versions of the interface and specifically recruited American and Brazilian users to test for cross-cultural effects. Our results show that users value paper recommendations, that the hybrid algorithms can be successfully combined, that different algorithms are more suitable for recommending different kinds of papers, and that users with different levels of experience perceive recommendations differently. These results can be applied to develop recommender systems for other types of digital libraries.", "title": "" }, { "docid": "e0fbfac63b894c46e3acda86adb67053", "text": "OBJECTIVE\nTo investigate the effectiveness of acupuncture compared with minimal acupuncture and with no acupuncture in patients with tension-type headache.\n\n\nDESIGN\nThree armed randomised controlled multicentre trial.\n\n\nSETTING\n28 outpatient centres in Germany.\n\n\nPARTICIPANTS\n270 patients (74% women, mean age 43 (SD 13) years) with episodic or chronic tension-type headache.\n\n\nINTERVENTIONS\nAcupuncture, minimal acupuncture (superficial needling at non-acupuncture points), or waiting list control. Acupuncture and minimal acupuncture were administered by specialised physicians and consisted of 12 sessions per patient over eight weeks.\n\n\nMAIN OUTCOME MEASURE\nDifference in numbers of days with headache between the four weeks before randomisation and weeks 9-12 after randomisation, as recorded by participants in headache diaries.\n\n\nRESULTS\nThe number of days with headache decreased by 7.2 (SD 6.5) days in the acupuncture group compared with 6.6 (SD 6.0) days in the minimal acupuncture group and 1.5 (SD 3.7) days in the waiting list group (difference: acupuncture v minimal acupuncture, 0.6 days, 95% confidence interval -1.5 to 2.6 days, P = 0.58; acupuncture v waiting list, 5.7 days, 3.9 to 7.5 days, P < 0.001). The proportion of responders (at least 50% reduction in days with headache) was 46% in the acupuncture group, 35% in the minimal acupuncture group, and 4% in the waiting list group.\n\n\nCONCLUSIONS\nThe acupuncture intervention investigated in this trial was more effective than no treatment but not significantly more effective than minimal acupuncture for the treatment of tension-type headache.\n\n\nTRIAL REGISTRATION NUMBER\nISRCTN9737659.", "title": "" }, { "docid": "5fd6462e402e3a3ab1e390243d80f737", "text": "We present TinyOS, a flexible, application-specific operating system for sensor networks. 
Sensor networks consist of (potentially) thousands of tiny, low-power nodes, each of which execute concurrent, reactive programs that must operate with severe memory and power constraints. The sensor network challenges of limited resources, event-centric concurrent applications, and low-power operation drive the design of TinyOS. Our solution combines flexible, fine-grain components with an execution model that supports complex yet safe concurrent operations. TinyOS meets these challenges well and has become the platform of choice for sensor network research; it is in use by over a hundred groups worldwide, and supports a broad range of applications and research topics. We provide a qualitative and quantitative evaluation of the system, showing that it supports complex, concurrent programs with very low memory requirements (many applications fit within 16KB of memory, and the core OS is 400 bytes) and efficient, low-power operation. We present our experiences with TinyOS as a platform for sensor network innovation and applications.", "title": "" }, { "docid": "eecb51caad133090d5efe1a693b2441e", "text": "Attracting Diverse Students As indicated in other articles in this issue, a major goal of the CS Principles effort was to attract a population of students that includes many who are not predisposed to study computing. The field not only battles negative stereotypes, but its labor pool must be enlarged both to meet expected demand, but also to introduce more diverse opinions, especially considering the importance of social media. The results are very encouraging.", "title": "" }, { "docid": "24dec0f0943833fb719b580fb3811508", "text": "This paper presents a new approach to authenticate individuals using triangulation of hand vein images and simultaneous extraction of knuckle shape information. The proposed method is fully automated and employs palm dorsal hand vein images acquired from the low-cost, near infrared, contactless imaging. The knuckle tips are used as key points for the image normalization and extraction of region of interest. The matching scores are generated in two parallel stages: (i) hierarchical matching score from the four topologies of triangulation in the binarized vein structures and (ii) from the geometrical features consisting of knuckle point perimeter distances in the acquired images. The weighted score level combination from these two matching scores are used to authenticate the individuals. The achieved experimental results from the proposed system using contactless palm dorsal-hand vein images are promising (equal error rate of 1.14%) and suggest more user friendly alternative for user identification.", "title": "" }, { "docid": "49f0371f84d7874a6ccc6f9dd0779d3b", "text": "Managing customer satisfaction has become a crucial issue in fast-food industry. This study aims at identifying determinant factor related to customer satisfaction in fast-food restaurant. Customer data are analyzed by using data mining method with two classification techniques such as decision tree and neural network. Classification models are developed using decision tree and neural network to determine underlying attributes of customer satisfaction. Generated rules are beneficial for managerial and practical implementation in fast-food industry. Decision tree and neural network yield more than 80% of predictive accuracy.", "title": "" } ]
scidocsrr
56d86a1ac1226e3057572e6f474b500f
A Continuous Occlusion Model for Road Scene Understanding
[ { "docid": "cc4c58f1bd6e5eb49044353b2ecfb317", "text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.", "title": "" } ]
[ { "docid": "75a226495dc4592f4ac52a710c9a2ab5", "text": "For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.", "title": "" }, { "docid": "f83bf92a38f1ce7734a5c1abce65f92f", "text": "This paper presents an Adaptive fuzzy logic PID controller for speed control of Brushless Direct current Motor drives which is widely used in various industrial systems, such as servo motor drives, medical, automobile and aerospace industry. BLDC motors were electronically commutated motor offer many advantages over Brushed DC Motor which includes increased efficiency, longer life, low volume and high torque. This paper presents an overview of performance of fuzzy PID controller and Adaptive fuzzy PID controller using Simulink model. Tuning Parameters and computing using Normal PID controller is difficult and also it does not give satisfied control characteristics when compare to Adaptive Fuzzy PID controller. From the Simulation results we verify that Adaptive Fuzzy PID controller give better control performance when compared to fuzzy PID controller. The software Package SIMULINK was used in control and Modelling of BLDC Motor.", "title": "" }, { "docid": "2fe6167981fd99c30a7e43f07f7b4e2a", "text": "The ability to generate narrative is of importance to computer systems that wish to use story effectively for entertainment, training, or education. One way to generate narrative is to use planning. However, story planners are limited by the fact that they can only operate on the story world provided, which impacts the ability of the planner to find a solution story plan and the quality and structure of the story plan if one is found. We present a planning algorithm for story generation that can nondeterministically make decisions about the description of the initial story world state in a leastcommitment fashion.", "title": "" }, { "docid": "ccedb6cff054254f3427ab0d45017d2a", "text": "Traffic and power generation are the main sources of urban air pollution. 
The idea that outdoor air pollution can cause exacerbations of pre-existing asthma is supported by an evidence base that has been accumulating for several decades, with several studies suggesting a contribution to new-onset asthma as well. In this Series paper, we discuss the effects of particulate matter (PM), gaseous pollutants (ozone, nitrogen dioxide, and sulphur dioxide), and mixed traffic-related air pollution. We focus on clinical studies, both epidemiological and experimental, published in the previous 5 years. From a mechanistic perspective, air pollutants probably cause oxidative injury to the airways, leading to inflammation, remodelling, and increased risk of sensitisation. Although several pollutants have been linked to new-onset asthma, the strength of the evidence is variable. We also discuss clinical implications, policy issues, and research gaps relevant to air pollution and asthma.", "title": "" }, { "docid": "bc75bfa627f138d6d01dd4b04898147b", "text": "The Adaptive Least Squares Correlation is a very potent and flexible technique for all kinds of data matching problems. Here its application to image matching is outlined. It allows for simultaneous radiometric corrections and local geometrical image shaping, whereby the system parameters are automatically assessed, corrected, and thus optimized during the least squares iterations. The various tools of least squares estimation can be favourably utilized for the assessment of the correlation quality. Furthermore, the system allows for stabilization and improvement of the correlation procedure through the simultaneous consideration of geometrical constraints, e.g. the collinearity condition. Some exciting new perspectives are emphasized, as for example multiphoto correlation, multitemporal and multisensor correlation, multipoint correlation, and simultaneous correlation/triangulation.", "title": "" }, { "docid": "ac9f71a97f6af0718587ffd0ea92d31d", "text": "Modern cyber-physical systems are complex networked computing systems that electronically control physical systems. Autonomous road vehicles are an important and increasingly ubiquitous instance. Unfortunately, their increasing complexity often leads to security vulnerabilities. Network connectivity exposes these vulnerable systems to remote software attacks that can result in real-world physical damage, including vehicle crashes and loss of control authority. We introduce an integrated architecture to provide provable security and safety assurance for cyber-physical systems by ensuring that safety-critical operations and control cannot be unintentionally affected by potentially malicious parts of the system. Finegrained information flow control is used to design both hardware and software, determining how low-integrity information can affect high-integrity control decisions. This security assurance is used to improve end-to-end security across the entire cyber-physical system. We demonstrate this integrated approach by developing a mobile robotic testbed modeling a self-driving system and testing it with a malicious attack. ACM Reference Format: Jed Liu, Joe Corbett-Davies, Andrew Ferraiuolo, Alexander Ivanov, Mulong Luo, G. Edward Suh, Andrew C. Myers, and Mark Campbell. 2018. Secure Autonomous Cyber-Physical Systems Through Verifiable Information Flow Control. InWorkshop on Cyber-Physical Systems Security & Privacy (CPS-SPC ’18), October 19, 2018, Toronto, ON, Canada. ACM, New York, NY, USA, 12 pages. 
https://doi.org/10.1145/3264888.3264889", "title": "" }, { "docid": "52e492ff5e057a8268fd67eb515514fe", "text": "We present a long-range passive (battery-free) radio frequency identification (RFID) and distributed sensing system using a single wire transmission line (SWTL) as the communication channel. A SWTL exploits guided surface wave propagation along a single conductor, which can be formed from existing infrastructure, such as power lines, pipes, or steel cables. Guided propagation along a SWTL has far lower losses than a comparable over-the-air (OTA) communication link; so much longer read distances can be achieved compared with the conventional OTA RFID system. In a laboratory-scale experiment with an ISO18000–6C (EPC Gen 2) passive tag, we demonstrate an RFID system using an 8 mm diameter, 5.2 m long SWTL. This SWTL has 30 dB lower propagation loss than a standard OTA RFID system at the same read range. We further demonstrate that the SWTL can tolerate extreme temperatures far beyond the capabilities of coaxial cable, by heating an operating SWTL conductor with a propane torch having a temperature of nearly 2000 °C. Extrapolation from the measured results suggest that a SWTL-based RFID system is capable of read ranges of over 70 m assuming a reader output power of +32.5 dBm and a tag power-up threshold of −7 dBm.", "title": "" }, { "docid": "1f13e466fe482f07e8446345ef811685", "text": "Predicting users' actions based on anonymous sessions is a challenging problem in web-based behavioral modeling research, mainly due to the uncertainty of user behavior and the limited information. Recent advances in recurrent neural networks have led to promising approaches to solving this problem, with long short-term memory model proving effective in capturing users' general interests from previous clicks. However, none of the existing approaches explicitly take the effects of users' current actions on their next moves into account. In this study, we argue that a long-term memory model may be insufficient for modeling long sessions that usually contain user interests drift caused by unintended clicks. A novel short-term attention/memory priority model is proposed as a remedy, which is capable of capturing users' general interests from the long-term memory of a session context, whilst taking into account users' current interests from the short-term memory of the last-clicks. The validity and efficacy of the proposed attention mechanism is extensively evaluated on three benchmark data sets from the RecSys Challenge 2015 and CIKM Cup 2016. The numerical results show that our model achieves state-of-the-art performance in all the tests.", "title": "" }, { "docid": "06d30f5d22689e07190961ae76f7b9a0", "text": "In recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network. In this paper, we target high-bandwidth data distribution from a single source to a large number of receivers. Applications include large-file transfers and real-time multimedia streaming. For these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures. 
This paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh. We construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network. Individual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel.Key contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment reveals up to a factor two bandwidth improvements under a variety of circumstances. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing. In a tree, it is critical that a node's parent delivers a high rate of application data to each child. In Bullet however, nodes simultaneously receive data from multiple sources in parallel, making it less important to locate any single source capable of sustaining a high transmission rate.", "title": "" }, { "docid": "91fbf465741c6a033a00a4aa982630b4", "text": "This paper presents an integrated functional link interval type-2 fuzzy neural system (FLIT2FNS) for predicting the stock market indices. The hybrid model uses a TSK (Takagi–Sugano–Kang) type fuzzy rule base that employs type-2 fuzzy sets in the antecedent parts and the outputs from the Functional Link Artificial Neural Network (FLANN) in the consequent parts. Two other approaches, namely the integrated FLANN and type-1 fuzzy logic system and Local Linear Wavelet Neural Network (LLWNN) are also presented for a comparative study. Backpropagation and particle swarm optimization (PSO) learning algorithms have been used independently to optimize the parameters of all the forecasting models. To test the model performance, three well known stock market indices like the Standard’s & Poor’s 500 (S&P 500), Bombay stock exchange (BSE), and Dow Jones industrial average (DJIA) are used. The mean absolute percentage error (MAPE) and root mean square error (RMSE) are used to find out the performance of all the three models. Finally, it is observed that out of three methods, FLIT2FNS performs the best irrespective of the time horizons spanning from 1 day to 1 month. © 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "020182709e4360cf4ba93190b08ad909", "text": "Standard sequential generation methods assume a pre-specified generation order, such as text generation methods which generate words from left to right. In this work, we propose a framework for training models of text generation that operate in non-monotonic orders; the model directly learns good orders, without any additional annotation. Our framework operates by generating a word at an arbitrary position, and then recursively generating words to its left and then words to its right, yielding a binary tree. Learning is framed as imitation learning, including a coaching method which moves from imitating an oracle to reinforcing the policy’s own preferences. 
Experimental results demonstrate that using the proposed method, it is possible to learn policies which generate text without pre-specifying a generation order, while achieving competitive performance with conventional left-to-right generation.", "title": "" }, { "docid": "90e218a8ae79dc1d53e53d4eb63839b8", "text": "Doubly fed induction generator (DFIG) technology is the dominant technology in the growing global market for wind power generation, due to the combination of variable-speed operation and a cost-effective partially rated power converter. However, the DFIG is sensitive to dips in supply voltage and without specific protection to “ride-through” grid faults, a DFIG risks damage to its power converter due to overcurrent and/or overvoltage. Conventional converter protection via a sustained period of rotor-crowbar closed circuit leads to poor power output and sustained suppression of the stator voltages. A new minimum-threshold rotor-crowbar method is presented in this paper, improving fault response by reducing crowbar application periods to 11-16 ms, successfully diverting transient overcurrents, and restoring good power control within 45 ms of both fault initiation and clearance, thus enabling the DFIG to meet grid-code fault-ride-through requirements. The new method is experimentally verified and evaluated using a 7.5-kW test facility.", "title": "" }, { "docid": "f1d096392288d06a481f6f856e8b4aba", "text": "The ever-growing complexity of software systems coupled with their stringent availability requirements are challenging the manual management of software after its deployment. This has motivated the development of self-adaptive software systems. Self-adaptation endows a software system with the ability to satisfy certain objectives by automatically modifying its behavior at runtime. While many promising approaches for the construction of self-adaptive software systems have been developed, the majority of them ignore the uncertainty underlying the adaptation. This has been one of the key inhibitors to widespread adoption of self-adaption techniques in risk-averse real-world applications. Uncertainty in this setting is a vaguely understood term. In this paper, we characterize the sources of uncertainty in self-adaptive software system, and demonstrate its impact on the system’s ability to satisfy its objectives. We then provide an alternative notion of optimality that explicitly incorporates the uncertainty underlying the knowledge (models) used for decision making. We discuss the state-of-the-art for dealing with uncertainty in this setting, and conclude with a set of challenges, which provide a road map for future research.", "title": "" }, { "docid": "700eae4f09baf96bffe94d600098a5fa", "text": "Temporally precise, noninvasive control of activity in well-defined neuronal populations is a long-sought goal of systems neuroscience. We adapted for this purpose the naturally occurring algal protein Channelrhodopsin-2, a rapidly gated light-sensitive cation channel, by using lentiviral gene delivery in combination with high-speed optical switching to photostimulate mammalian neurons. We demonstrate reliable, millisecond-timescale control of neuronal spiking, as well as control of excitatory and inhibitory synaptic transmission. 
This technology allows the use of light to alter neural processing at the level of single spikes and synaptic events, yielding a widely applicable tool for neuroscientists and biomedical engineers.", "title": "" }, { "docid": "59c757aa28dcb770ecf5b01dc26ba087", "text": "Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).", "title": "" }, { "docid": "b593637ff8f1692314108198086dede1", "text": "The problem addressed in this paper is that of orthogonally packing a given set of rectangular-shaped items into the minimum number of three-dimensional rectangular bins. The problem is strongly NP-hard and extremely difficult to solve in practice. Lower bounds are discussed, and it is proved that the asymptotic worst-case performance ratio of the continuous lower bound is 1/8. An exact algorithm for filling a single bin is developed, leading to the definition of an exact branch-and-bound algorithm for the three-dimensional bin packing problem, which also incorporates original approximation algorithms. Extensive computational results, involving instances with up to 90 items, are presented: it is shown that many instances can be solved to optimality within a reasonable time limit.", "title": "" }, { "docid": "9920660432c2a2cf1f83ed6b8412b433", "text": "We propose a new approach for metric learning by framing it as learning a sparse combination of locally discriminative metrics that are inexpensive to generate from the training data. This flexible framework allows us to naturally derive formulations for global, multi-task and local metric learning. The resulting algorithms have several advantages over existing methods in the literature: a much smaller number of parameters to be estimated and a principled way to generalize learned metrics to new testing data points. To analyze the approach theoretically, we derive a generalization bound that justifies the sparse combination. Empirically, we evaluate our algorithms on several datasets against state-of-the-art metric learning methods. 
The results are consistent with our theoretical findings and demonstrate the superiority of our approach in terms of classification performance and scalability.", "title": "" }, { "docid": "eed70d4d8bfbfa76382bfc32dd12c3db", "text": "Three studies tested basic assumptions derived from a theoretical model based on the dissociation of automatic and controlled processes involved in prejudice. Study 1 supported the model's assumption that highand low-prejudice persons are equally knowledgeable of the cultural stereotype. The model suggests that the stereotype is automatically activated in the presence of a member (or some symbolic equivalent) of the stereotyped group and that low-prejudice responses require controlled inhibition of the automatically activated stereotype. Study 2, which examined the effects of automatic stereotype activation on the evaluation of ambiguous stereotype-relevant behaviors performed by a race-unspecified person, suggested that when subjects' ability to consciously monitor stereotype activation is precluded, both highand low-prejudice subjects produce stereotype-congruent evaluations of ambiguous behaviors. Study 3 examined highand low-prejudice subjects' responses in a consciously directed thought-listing task. Consistent with the model, only low-prejudice subjects inhibited the automatically activated stereotype-congruent thoughts and replaced them with thoughts reflecting equality and negations of the stereotype. The relation between stereotypes and prejudice and implications for prejudice reduction are discussed.", "title": "" }, { "docid": "f59adaac85f7131bf14335dad2337568", "text": "Product search is an important part of online shopping. In contrast to many search tasks, the objectives of product search are not confined to retrieving relevant products. Instead, it focuses on finding items that satisfy the needs of individuals and lead to a user purchase. The unique characteristics of product search make search personalization essential for both customers and e-shopping companies. Purchase behavior is highly personal in online shopping and users often provide rich feedback about their decisions (e.g. product reviews). However, the severe mismatch found in the language of queries, products and users make traditional retrieval models based on bag-of-words assumptions less suitable for personalization in product search. In this paper, we propose a hierarchical embedding model to learn semantic representations for entities (i.e. words, products, users and queries) from different levels with their associated language data. Our contributions are three-fold: (1) our work is one of the initial studies on personalized product search; (2) our hierarchical embedding model is the first latent space model that jointly learns distributed representations for queries, products and users with a deep neural network; (3) each component of our network is designed as a generative model so that the whole structure is explainable and extendable. Following the methodology of previous studies, we constructed personalized product search benchmarks with Amazon product data. Experiments show that our hierarchical embedding model significantly outperforms existing product search baselines on multiple benchmark datasets.", "title": "" }, { "docid": "d8056ee6b9d1eed4bc25e302c737780c", "text": "This survey reviews the research related to PageRank computing. 
Components of a PageRank vector serve as authority weights for Web pages independent of their textual content, solely based on the hyperlink structure of the Web. PageRank is typically used as a Web Search ranking component. This defines the importance of the model and the data structures that underlie PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much more complex challenge. Recently, significant effort has been invested in building sets of personalized PageRank vectors. PageRank is also used in many diverse applications other than ranking. Below we are interested in the theoretical foundations of the PageRank formulation, in the acceleration of PageRank computing, in the effects of particular aspects of Web graph structure on optimal organization of computations, and in PageRank stability. We also review alternative models that lead to authority indices similar to PageRank and the role of such indices in applications other than Web Search. We also discuss link-based search personalization and outline some aspects of PageRank infrastructure, from associated measures of convergence to link preprocessing.", "title": "" } ]
scidocsrr
84eca8b63ca4a422db9596d147e426d1
Deploying PAWS: Field Optimization of the Protection Assistant for Wildlife Security
[ { "docid": "b64c48d4d2820e01490076c1b18cf32b", "text": "The availability of detailed environmental data, together with inexpensive and powerful computers, has fueled a rapid increase in predictive modeling of species environmental requirements and geographic distributions. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. However, absence data are not available for most species. In this paper, we introduce the use of the maximum entropy method (Maxent) for modeling species geographic distributions with presence-only data. Maxent is a general-purpose machine learning method with a simple and precise mathematical formulation, and it has a number of aspects that make it well-suited for species distribution modeling. In mmals: a diction emaining outline eceiver dicating ts present ues horder to investigate the efficacy of the method, here we perform a continental-scale case study using two Neotropical ma lowland species of sloth, Bradypus variegatus, and a small montane murid rodent, Microryzomys minutus. We compared Maxent predictions with those of a commonly used presence-only modeling method, the Genetic Algorithm for Rule-Set Pre (GARP). We made predictions on 10 random subsets of the occurrence records for both species, and then used the r localities for testing. Both algorithms provided reasonable estimates of the species’ range, far superior to the shaded maps available in field guides. All models were significantly better than random in both binomial tests of omission and r operating characteristic (ROC) analyses. The area under the ROC curve (AUC) was almost always higher for Maxent, in better discrimination of suitable versus unsuitable areas for the species. The Maxent modeling approach can be used in i form for many applications with presence-only datasets, and merits further research and development. © 2005 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "adb02577e7fba530c2406fbf53571d14", "text": "Event-related potentials (ERPs) recorded from the human scalp can provide important information about how the human brain normally processes information and about how this processing may go awry in neurological or psychiatric disorders. Scientists using or studying ERPs must strive to overcome the many technical problems that can occur in the recording and analysis of these potentials. The methods and the results of these ERP studies must be published in a way that allows other scientists to understand exactly what was done so that they can, if necessary, replicate the experiments. The data must then be analyzed and presented in a way that allows different studies to be compared readily. This paper presents guidelines for recording ERPs and criteria for publishing the results.", "title": "" }, { "docid": "ddff0a3c6ed2dc036cf5d6b93d2da481", "text": "Dense video captioning is a newly emerging task that aims at both localizing and describing all events in a video. We identify and tackle two challenges on this task, namely, (1) how to utilize both past and future contexts for accurate event proposal predictions, and (2) how to construct informative input to the decoder for generating natural event descriptions. First, previous works predominantly generate temporal event proposals in the forward direction, which neglects future video context. We propose a bidirectional proposal method that effectively exploits both past and future contexts to make proposal predictions. Second, different events ending at (nearly) the same time are indistinguishable in the previous works, resulting in the same captions. We solve this problem by representing each event with an attentive fusion of hidden states from the proposal module and video contents (e.g., C3D features). We further propose a novel context gating mechanism to balance the contributions from the current event and its surrounding contexts dynamically. We empirically show that our attentively fused event representation is superior to the proposal hidden states or video contents alone. By coupling proposal and captioning modules into one unified framework, our model outperforms the state-of-the-arts on the ActivityNet Captions dataset with a relative gain of over 100% (Meteor score increases from 4.82 to 9.65).", "title": "" }, { "docid": "07c43b1daa2520196e733b6efbd75a2b", "text": "Disruptive digital technologies empower customers to define how they would like to interact with organizations. Consequently, organizations often struggle to implement an appropriate omni-channel strategy (OCS) that both meets customers’ interaction preferences and can be operated efficiently. Despite this strong practical need, research on omni-channel management predominantly adopts a descriptive perspective. There is little prescriptive knowledge to support organizations in assessing the business value of OCSs and comparing them accordingly. To address this research gap, we propose an economic decision model that helps select an appropriate OCS, considering online and offline channels, the opening and closing of channels, non-sequential customer journeys, and customers’ channel preferences. Drawing from investment theory and value-based management, the decision model recommends implementing the OCS with the highest contribution to an organization’s long-term firm value. 
We validate the decision model using real-world data on the omni-channel environment of a German financial service provider.", "title": "" }, { "docid": "79798f4fbe3cffdf7c90cc5349bf0531", "text": "When a software system starts behaving abnormally during normal operations, system administrators resort to the use of logs, execution traces, and system scanners (e.g., anti-malwares, intrusion detectors, etc.) to diagnose the cause of the anomaly. However, the unpredictable context in which the system runs and daily emergence of new software threats makes it extremely challenging to diagnose anomalies using current tools. Host-based anomaly detection techniques can facilitate the diagnosis of unknown anomalies but there is no common platform with the implementation of such techniques. In this paper, we propose an automated anomaly detection framework (Total ADS) that automatically trains different anomaly detection techniques on a normal trace stream from a software system, raise anomalous alarms on suspicious behaviour in streams of trace data, and uses visualization to facilitate the analysis of the cause of the anomalies. Total ADS is an extensible Eclipse-based open source framework that employs a common trace format to use different types of traces, a common interface to adapt to a variety of anomaly detection techniques (e.g., HMM, sequence matching, etc.). Our case study on a modern Linux server shows that Total ADS automatically detects attacks on the server, shows anomalous paths in traces, and provides forensic insights.", "title": "" }, { "docid": "6a1073b72ef20fd59e705400dbdcc868", "text": "In today’s world, there is a continuous global need for more energy which, at the same time, has to be cleaner than the energy produced from the traditional generation technologies. This need has facilitated the increasing penetration of distributed generation (DG) technologies and primarily of renewable energy sources (RES). The extensive use of such energy sources in today’s electricity networks can indisputably minimize the threat of global warming and climate change. However, the power output of these energy sources is not as reliable and as easy to adjust to changing demand cycles as the output from the traditional power sources. This disadvantage can only be effectively overcome by the storing of the excess power produced by DG-RES. Therefore, in order for these new sources to become completely reliable as primary sources of energy, energy storage is a crucial factor. In this work, an overview of the current and future energy storage technologies used for electric power applications is carried out. Most of the technologies are in use today while others are still under intensive research and development. A comparison between the various technologies is presented in terms of the most important technological characteristics of each technology. The comparison shows that each storage technology is different in terms of its ideal network application environment and energy storage scale. This means that in order to achieve optimum results, the unique network environment and the specifications of the storage device have to be studied thoroughly, before a decision for the ideal storage technology to be selected is taken. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "720e417783f801e8f97531710b5eb779", "text": "In this article, a novel Vertical Take-Off and Landing (VTOL) Single Rotor Unmanned Aerial Vehicle (SR-UAV) will be presented. 
The SRUAV's design properties will be analysed in detail, with respect to technical novelties outlining the merits of such a conceptual approach. The system's model will be mathematically formulated, while a cascaded P-PI and PID-based control structure will be utilized in extensive simulation trials for the preliminary evaluation of the SR-UAV's attitude and translational performance.", "title": "" }, { "docid": "72a01822f817e238812f9722629cf4dc", "text": "Machine learning is increasingly used in high impact applications such as prediction of hospital re-admission, cancer screening or bio-medical research applications. As predictions become increasingly accurate, practitioners may be interested in identifying actionable changes to inputs in order to alter their class membership. For example, a doctor might want to know what changes to a patient’s status would predict him/her to not be re-admitted to the hospital soon. Szegedy et al. (2013b) demonstrated that identifying such changes can be very hard in image classification tasks. In fact, tiny, imperceptible changes can result in completely different predictions without any change to the true class label of the input. In this paper we ask the question if we can make small but meaningful changes in order to truly alter the class membership of images from a source class to a target class. To this end we propose deep manifold traversal, a method that learns the manifold of natural images and provides an effective mechanism to move images from one area (dominated by the source class) to another (dominated by the target class).The resulting algorithm is surprisingly effective and versatile. It allows unrestricted movements along the image manifold and only requires few images from source and target to identify meaningful changes. We demonstrate that the exact same procedure can be used to change an individual’s appearance of age, facial expressions or even recolor black and white images.", "title": "" }, { "docid": "51677dc68fac623815681ff45a91f1aa", "text": "A business process is a collection of activities to create more business values and its continuous improvement aligned with business goals is essential to survive in fast changing business environment. However, it is quite challenging to find out whether a change of business processes positively affects business goals or not, if there are problems in the changing, what the reasons of the problems are, what solutions exist for the problems and which solutions should be selected. Big data analytics along with a goal-orientation which helps find out insights from a large volume of data in a goal concept opens up a new way for an effective business process reengineering. In this paper, we suggest a novel modeling framework which consists of a conceptual modeling language, a process and a tool for effective business processes reengineering using big data analytics and a goal-oriented approach. The modeling language defines important concepts for business process reengineering with metamodels and shows the concepts with complementary views: Business Goal-Process-Big Analytics Alignment View, Transformational Insight View and Big Analytics Query View. Analyzers hypothesize problems and solutions of business processes by using the modeling language, and the problems and solutions will be validated by the results of Big Analytics Queries which supports not only standard SQL operation, but also analytics operation such as prediction. 
The queries are run in an execution engine of our tool on top of Spark which is one of big data processing frameworks. In a goal-oriented spirit, all concepts not only business goals and business processes, but also big analytics queries are considered as goals, and alternatives are explored and selections are made among the alternatives using trade-off analysis. To illustrate and validate our approach, we use an automobile logistics example, then compare previous work.", "title": "" }, { "docid": "71c31f41d116a51786a4e8ded2c5fb87", "text": "Targeting CTLA-4 represents a new type of immunotherapeutic approach, namely immune checkpoint inhibition. Blockade of CTLA-4 by ipilimumab was the first strategy to achieve a significant clinical benefit for late-stage melanoma patients in two phase 3 trials. These results fueled the notion of immunotherapy being the breakthrough strategy for oncology in 2013. Subsequently, many trials have been set up to test various immune checkpoint modulators in malignancies, not only in melanoma. In this review, recent new ideas about the mechanism of action of CTLA-4 blockade, its current and future therapeutic use, and the intensive search for biomarkers for response will be discussed. Immune checkpoint blockade, targeting CTLA-4 and/or PD-1/PD-L1, is currently the most promising systemic therapeutic approach to achieve long-lasting responses or even cure in many types of cancer, not just in patients with melanoma.", "title": "" }, { "docid": "9af656aff4c07feafb97fa7a2efa8967", "text": "The Global Positioning System (GPS) is an accurate positioning system. The GPS has an accuracy that varies from 4mm up to 11m. This project in lieu of thesis investigates the state of art of the GPS navigation and positioning for outdoor and indoor environments with a particular focus to the outdoor applications. This project includes an overview of GPS system, the GPS segments, the composition of signals from the GPS satellites, and the structure of the GPS data. A comprehensive review of the factors influencing the GPS accuracy such as GPS error sources, and Geometric Dilution of Precision “GDOP” are discussed. The significant up-to-date techniques and methods used for enhancement of the GPS solution such as Differential GPS “DGPS”, Carrier phase, Pseudolite, and Wide Area Differential GPS “WADGPS” are thoroughly described.", "title": "" }, { "docid": "987024b9cca47797813f27da08d9a7c6", "text": "Image segmentation plays a crucial role in many medical imaging applications by automating or facilitating the delineation of anatomical structures and other regions of interest. We present herein a critical appraisal of the current status of semi-automated and automated methods for the segmentation of anatomical medical images. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. The use of image segmentation in different imaging modalities is also described along with the difficulties encountered in each modality. We conclude with a discussion on the future of image segmentation methods in biomedical research.", "title": "" }, { "docid": "02824157bcfd4419bca45ad450b14cd1", "text": "Neural architecture for named entity recognition has achieved great success in the field of natural language processing. Currently, the dominating architecture consists of a bidirectional recurrent neural network (RNN) as the encoder and a conditional random field (CRF) as the decoder. 
In this paper, we propose a deformable stacked structure for named entity recognition, in which the connections between two adjacent layers are dynamically established. We evaluate the deformable stacked structure by adapting it to different layers. Our model achieves the state-of-the-art performances on the OntoNotes dataset.", "title": "" }, { "docid": "2f838f0268fb74912d264f35277fe589", "text": "OBJECTIVE\n: The objective of this study was to examine the histologic features of the labia minora, within the context of the female sexual response.\n\n\nMETHODS\n: Eight cadaver vulvectomy specimens were used for this study. All specimens were embedded in paraffin and were serially sectioned. Selected sections were stained with hematoxylin and eosin, elastic Masson trichrome, and S-100 antibody stains.\n\n\nRESULTS\n: The labia minora are thinly keratinized structures. The primary supporting tissue is collagen, with many vascular and neural elements structures throughout its core and elastin interspersed throughout.\n\n\nCONCLUSIONS\n: The labia minora are specialized, highly vascular folds of tissue with an abundance of neural elements. These features corroborate previous functional and observational data that the labia minora engorge with arousal and have a role in the female sexual response.", "title": "" }, { "docid": "b71197073ea33bb8c61973e8cd7d2775", "text": "This paper discusses the latest developments in the optimization and fabrication of 3.3kV SiC vertical DMOSFETs. The devices show superior on-state and switching losses compared to the even the latest generation of 3.3kV fast Si IGBTs and promise to extend the upper switching frequency of high-voltage power conversion systems beyond several tens of kHz without the need to increase part count with 3-level converter stacks of faster 1.7kV IGBTs.", "title": "" }, { "docid": "f16d93249254118060ce81b2f92faca5", "text": "Radiologists are critically interested in promoting best practices in medical imaging, and to that end, they are actively developing tools that will optimize terminology and reporting practices in radiology. The RadLex® vocabulary, developed by the Radiological Society of North America (RSNA), is intended to create a unifying source for the terminology that is used to describe medical imaging. The RSNA Reporting Initiative has developed a library of reporting templates to integrate reusable knowledge, or meaning, into the clinical reporting process. This report presents the initial analysis of the intersection of these two major efforts. From 70 published radiology reporting templates, we extracted the names of 6,489 reporting elements. These terms were reviewed in conjunction with the RadLex vocabulary and classified as an exact match, a partial match, or unmatched. Of 2,509 unique terms, 1,017 terms (41%) matched exactly to RadLex terms, 660 (26%) were partial matches, and 832 reporting terms (33%) were unmatched to RadLex. There is significant overlap between the terms used in the structured reporting templates and RadLex. The unmatched terms were analyzed using the multidimensional scaling (MDS) visualization technique to reveal semantic relationships among them. 
The co-occurrence analysis with the MDS visualization technique provided a semantic overview of the investigated reporting terms and gave a metric to determine the strength of association among these terms.", "title": "" }, { "docid": "0b3ed0ce26999cb6188fb0c88eb483ab", "text": "We consider the problem of learning causal networks with interventions, when each intervention is limited in size under Pearl's Structural Equation Model with independent errors (SEM-IE). The objective is to minimize the number of experiments to discover the causal directions of all the edges in a causal graph. Previous work has focused on the use of separating systems for complete graphs for this task. We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in the worst case. In addition, we present a novel separating system construction, whose size is close to optimal and is arguably simpler than previous work in combinatorics. We also develop a novel information theoretic lower bound on the number of interventions that applies in full generality, including for randomized adaptive learning algorithms. For general chordal graphs, we derive worst case lower bounds on the number of interventions. Building on observations about induced trees, we give a new deterministic adaptive algorithm to learn directions on any chordal skeleton completely. In the worst case, our achievable scheme is an α-approximation algorithm where α is the independence number of the graph. We also show that there exist graph classes for which the sufficient number of experiments is close to the lower bound. In the other extreme, there are graph classes for which the required number of experiments is multiplicatively α away from our lower bound. In simulations, our algorithm almost always performs very close to the lower bound, while the approach based on separating systems for complete graphs is significantly worse for random chordal graphs.", "title": "" }, { "docid": "4fb6b884b22962c6884bd94f8b76f6f2", "text": "This paper describes a novel motion estimation algorithm for floating base manipulators that utilizes low-cost inertial measurement units (IMUs) containing a three-axis gyroscope and a three-axis accelerometer. Four strap-down microelectromechanical system (MEMS) IMUs are mounted on each link to form a virtual IMU whose body's fixed frame is located at the center of the joint rotation. An extended Kalman filter (EKF) and a complementary filter are used to develop a virtual IMU by fusing together the output of four IMUs. The novelty of the proposed algorithm is that no forward kinematic model that requires data flow from previous joints is needed. The measured results obtained from the planar motion of a hydraulic arm show that the accuracy of the estimation of the joint angle is within ± 1 degree and that the root mean square error is less than 0.5 degree.", "title": "" }, { "docid": "1edb5f3179ebfc33922e12a0c2eea294", "text": "PURPOSE OF REVIEW\nThis review discusses the rational development of guidelines for the management of neonatal sepsis in developing countries.\n\n\nRECENT FINDINGS\nDiagnosis of neonatal sepsis with high specificity remains challenging in developing countries. Aetiology data, particularly from rural, community-based studies, are very limited, but molecular tests to improve diagnostics are being tested in a community-based study in South Asia.
Antibiotic susceptibility data are limited, but suggest reducing susceptibility to first-and second-line antibiotics in both hospital and community-acquired neonatal sepsis. Results of clinical trials in South Asia and sub-Saharan Africa assessing feasibility of simplified antibiotic regimens are awaited.\n\n\nSUMMARY\nEffective management of neonatal sepsis in developing countries is essential to reduce neonatal mortality and morbidity. Simplified antibiotic regimens are currently being examined in clinical trials, but reduced antimicrobial susceptibility threatens current empiric treatment strategies. Improved clinical and microbiological surveillance is essential, to inform current practice, treatment guidelines, and monitor implementation of policy changes.", "title": "" }, { "docid": "7568cb435d0211248e431d865b6a477e", "text": "We propose prosody embeddings for emotional and expressive speech synthesis networks. The proposed methods introduce temporal structures in the embedding networks, thus enabling fine-grained control of the speaking style of the synthesized speech. The temporal structures can be designed either on the speech side or the text side, leading to different control resolutions in time. The prosody embedding networks are plugged into end-to-end speech synthesis networks and trained without any other supervision except for the target speech for synthesizing. It is demonstrated that the prosody embedding networks learned to extract prosodic features. By adjusting the learned prosody features, we could change the pitch and amplitude of the synthesized speech both at the frame level and the phoneme level. We also introduce the temporal normalization of prosody embeddings, which shows better robustness against speaker perturbations during prosody transfer tasks.", "title": "" }, { "docid": "721b6d09f51b268a30d8cf93b19ca7f4", "text": "Permanent-magnet (PM) motors with both magnets and armature windings on the stator (stator PM motors) have attracted considerable attention due to their simple structure, robust configuration, high power density, easy heat dissipation, and suitability for high-speed operations. However, current PM motors in industrial, residential, and automotive applications are still dominated by interior permanent-magnet motors (IPM) because the claimed advantages of stator PM motors have not been fully investigated and validated. Hence, this paper will perform a comparative study between a stator-PM motor, namely, a flux switching PM motor (FSPM), and an IPM which has been used in the 2004 Prius hybrid electric vehicle (HEV). For a fair comparison, the two motors are designed at the same phase current, current density, and dimensions including the stator outer diameter and stack length. First, the Prius-IPM is investigated by means of finite-element method (FEM). The FEM results are then verified by experimental results to confirm the validity of the methods used in this study. Second, the FSPM design is optimized and investigated based on the same method used for the Prius-IPM. Third, the electromagnetic performance and the material mass of the two motors are compared. It is concluded that FSPM has more sinusoidal back-EMF hence is more suitable for BLAC control. It also offers the advantage of smaller torque ripple and better mechanical integrity for safer and smoother operations. But the FSPM has disadvantages such as low magnet utilization ratio and high cost. 
It may not be able to compete with IPM in automotive and other applications where cost constraints are tight.", "title": "" } ]
scidocsrr
740b69f146260a8903d3d4e61e59150b
Blockage Robust and Efficient Scheduling for Directional mmWave WPANs
[ { "docid": "ea278850f00c703bdd73957c3f7a71ce", "text": "In this paper, we consider the directional multigigabit (DMG) transmission problem in IEEE 802.11ad wireless local area networks (WLANs) and design a random-access-based medium access control (MAC) layer protocol incorporated with a directional antenna and cooperative communication techniques. A directional cooperative MAC protocol, namely, D-CoopMAC, is proposed to coordinate the uplink channel access among DMG stations (STAs) that operate in an IEEE 802.11ad WLAN. Using a 3-D Markov chain model with consideration of the directional hidden terminal problem, we develop a framework to analyze the performance of the D-CoopMAC protocol and derive a closed-form expression of saturated system throughput. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of D-CoopMAC varies with the number of DMG STAs or beam sectors. In addition, the D-CoopMAC protocol can significantly improve system performance, as compared with the traditional IEEE 802.11ad MAC protocol.", "title": "" }, { "docid": "e57131739db1ed904cb0032dddd67804", "text": "We present a cross-layer modeling and design approach for multigigabit indoor wireless personal area networks (WPANs) utilizing the unlicensed millimeter (mm) wave spectrum in the 60 GHz band. Our approach accounts for the following two characteristics that sharply distinguish mm wave networking from that at lower carrier frequencies. First, mm wave links are inherently directional: directivity is required to overcome the higher path loss at smaller wavelengths, and it is feasible with compact, low-cost circuit board antenna arrays. Second, indoor mm wave links are highly susceptible to blockage because of the limited ability to diffract around obstacles such as the human body and furniture. We develop a diffraction-based model to determine network link connectivity as a function of the locations of stationary and moving obstacles. For a centralized WPAN controlled by an access point, it is shown that multihop communication, with the introduction of a small number of relay nodes, is effective in maintaining network connectivity in scenarios where single-hop communication would suffer unacceptable outages. The proposed multihop MAC protocol accounts for the fact that every link in the WPAN is highly directional, and is shown, using packet level simulations, to maintain high network utilization with low overhead.", "title": "" } ]
[ { "docid": "79564b938dde94306a2a142240bf30ea", "text": "Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual efforts. In the context of modern plant phenotyping, automating this task is required to meet the need of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally image-based approaches have also received much attention in plant-related studies. Yet a fact is that most image-based systems for plant phenotyping are deployed under controlled laboratory environment. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for further robust computer vision approaches to address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations via modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large margins and achieves the overall best counting performance, with a mean absolute error of 6.6 and a mean squared error of 9.6 averaged over 8 test sequences. TasselNet can achieve robust in-field counting of maize tassels with a relatively high degree of accuracy. Our experimental evaluations also suggest several good practices for practitioners working on maize-tassel-like counting problems. It is worth noting that, though the counting errors have been greatly reduced by TasselNet, in-field counting of maize tassels remains an open and unsolved problem.", "title": "" }, { "docid": "435618f85e2ca71ac23b68f09413ad1e", "text": "> Context • The enactive paradigm in the cognitive sciences is establishing itself as a strong and comprehensive alternative to the computationalist mainstream. However, its own particular historical roots have so far been largely ignored in the historical analyses of the cognitive sciences. > Problem • In order to properly assess the enactive paradigm’s theoretical foundations in terms of their validity, novelty and potential future directions of development, it is essential for us to know more about the history of ideas that has led to the current state of affairs. 
> Method • The meaning of the disappearance of the field of cybernetics and the rise of second-order cybernetics is analyzed by taking a closer look at the work of representative figures for each of the phases – Rosenblueth, Wiener and Bigelow for the early wave of cybernetics, Ashby for its culmination, and von Foerster for the development of the second-order approach. > Results • It is argued that the disintegration of cybernetics eventually resulted in two distinct scientific traditions, one going from symbolic AI to modern cognitive science on the one hand, and the other leading from second-order cybernetics to the current enactive paradigm. > Implications • We can now understand that the extent to which the cognitive sciences have neglected their cybernetic parent is precisely the extent to which cybernetics had already carried the tendencies that would later find fuller expression in second-order cybernetics. >", "title": "" }, { "docid": "0cb3cdb1e44fd9171156ad46fdf2d2ed", "text": "In this paper, from the viewpoint of scene under standing, a three-layer Bayesian hierarchical framework (BHF) is proposed for robust vacant parking space detection. In practice, the challenges of vacant parking space inference come from dramatic luminance variations, shadow effect, perspective distortion, and the inter-occlusion among vehicles. By using a hidden labeling layer between an observation layer and a scene layer, the BHF provides a systematic generative structure to model these variations. In the proposed BHF, the problem of luminance variations is treated as a color classification problem and is tack led via a classification process from the observation layer to the labeling layer, while the occlusion pattern, perspective distortion, and shadow effect are well modeled by the relationships between the scene layer and the labeling layer. With the BHF scheme, the detection of vacant parking spaces and the labeling of scene status are regarded as a unified Bayesian optimization problem subject to a shadow generation model, an occlusion generation model, and an object classification model. The system accuracy was evaluated by using outdoor parking lot videos captured from morning to evening. Experimental results showed that the proposed framework can systematically determine the vacant space number, efficiently label ground and car regions, precisely locate the shadowed regions, and effectively tackle the problem of luminance variations.", "title": "" }, { "docid": "2a33f7e91a81435c41fbbaf18ca4b588", "text": "To enable light fields of large environments to be captured, they would have to be sparse, i.e. with a relatively large distance between views. Such sparseness, however, causes subsequent processing to be much more difficult than would be the case with dense light fields. This includes segmentation. In this paper, we address the problem of meaningful segmentation of a sparse planar light field, leading to segments that are coherent between views. In addition, uniquely our method does not make the assumption that all surfaces in the environment are perfect Lambertian reflectors, which further broadens its applicability. Our fully automatic segmentation pipeline leverages scene structure, and does not require the user to navigate through the views to fix inconsistencies. The key idea is to combine coarse estimations given by an over-segmentation of the scene into super-rays, with detailed ray-based processing. 
We show the merit of our algorithm by means of a novel way to perform intrinsic light field decomposition, outperforming state-of-the-art methods.", "title": "" }, { "docid": "faa5037145abef48d2acf5435df97bf2", "text": "This clinical report describes the rehabilitation of a patient with a history of mandibulectomy that involved the use of a fibula free flap and an implant-supported fixed complete denture. A recently introduced material, polyetherketoneketone (PEKK), was used as the framework material for the prosthesis, and the treatment produced favorable esthetic and functional results.", "title": "" }, { "docid": "a67df1737ca4e5cb41fe09ccb57c0e88", "text": "Generation of electricity from solar energy has gained worldwide acceptance due to its abundant availability and eco-friendly nature. Even though the power generated from solar looks to be attractive; its availability is subjected to variation owing to many factors such as change in irradiation, temperature, shadow etc. Hence, extraction of maximum power from solar PV using Maximum Power Point Tracking (MPPT) method was the subject of study in the recent past. Among many methods proposed, Hill Climbing and Incremental Conductance MPPT methods were popular in reaching Maximum Power under constant irradiation. However, these methods show large steady state oscillations around MPP and poor dynamic performance when subjected to change in environmental conditions. On the other hand, bioinspired algorithms showed excellent characteristics when dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations. Hence, in this paper an attempt is made by applying modifications to Particle Swarm Optimization technique, with emphasis on initial value selection, for Maximum Power Point Tracking. The key features of this method include ability to track the global peak power accurately under change in environmental condition with almost zero steady state oscillations, faster dynamic response and easy implementation. Systematic evaluation has been carried out for different partial shading conditions and finally the results obtained are compared with existing methods. In addition, simulation results are validated via a built-in hardware prototype. © 2015 Published by Elsevier B.V. Nomenclature: IPV: current source; Rs: series resistance; Rp: parallel resistance; VD: diode voltage; ID: diode current; I0: leakage current; Vmpp: voltage at maximum power point; Voc: open circuit voltage; Impp: current at maximum power point; Isc: short circuit current; Vmpn: nominal maximum power point voltage at 1000 W/m2; Npp: number of parallel PV modules; Nss: number of series PV modules; w: weight factor; c1: acceleration factor; c2: acceleration factor; pbest: personal best position; gbest: global best position; Vt: thermal voltage; K: Boltzmann constant; T: temperature; q: electron charge; Ns: number of cells in series; Vocn: nominal open circuit voltage at 1000 W/m2; G: irradiation; Gn: nominal irradiation; Kv: voltage temperature coefficient; dT: difference in temperature; RLmin: minimum value of load at output; RLmax: maximum value of load at output; Rin: internal resistance of the PV module; RPVmin: minimum reflective impedance of PV array; RPVmax: maximum reflective impedance of PV array; R: equivalent output load resistance; converter efficiency.
1. Introduction. Ever growing energy demand by mankind and the limited availability of resources remain a major challenge to the power sector industry. The need for renewable energy resources has been augmented in large scale and aroused due to its huge availability and pollution free operation. Among the various renewable energy resources, solar energy has gained worldwide recognition because of its minimal maintenance, zero noise and reliability. Because of the aforementioned advantages, solar energy has been widely used for various applications, including but not limited to megawatt scale power plants, water pumping, solar home systems, communication satellites, space vehicles and reverse osmosis plants [1]. However, power generation using solar energy still remains uncertain, despite all the efforts, due to various factors such as poor
conversion efficiency, high installation cost and reduced power output under varying environmental conditions. Further, the characteristics of solar PV are non-linear in nature, imposing constraints on solar power generation. Therefore, to maximize the power output from solar PV and to enhance the operating efficiency of the solar photovoltaic system, Maximum Power Point Tracking (MPPT) algorithms are essential [2]. Various MPPT algorithms [3–5] have been investigated and reported in the literature and the most popular ones are Fractional Open Circuit Voltage [6–8], Fractional Short Circuit Current [9–11], Perturb and Observe (P&O) [12–17], Incremental Conductance (Inc. Cond.) [18–22], and Hill Climbing (HC) algorithm [23–26]. In the fractional open circuit voltage and fractional short circuit current methods, performance depends on an approximate linear correlation between Vmpp, Voc and Impp, Isc values. However, the above relation is not practically valid; hence, the exact value of the Maximum Power Point (MPP) cannot be assured. The Perturb and Observe (P&O) method works with voltage perturbation based on present and previous operating power values. Regardless of its simple structure, its efficiency principally depends on the tradeoff between the tracking speed and the steady state oscillations in the region of MPP [15]. The Incremental Conductance (Inc. Cond.) algorithm works on the principle of comparing ratios of incremental conductance with instantaneous conductance and it has a similar disadvantage as that of the P&O method [20,21]. The HC method works like P&O but it is based on the perturbation of the duty cycle of the power converter. All these traditional methods have the following disadvantages in common: reduced efficiency and steady state oscillations around MPP.
Realizing the above stated drawbacks, various researchers have worked on applying certain Artificial Intelligence (AI) techniques like Neural Network (NN) [27,28] and Fuzzy Logic Control (FLC) [29,30]. However, these techniques require periodic training, enormous volume of data for training, computational complexity and large memory capacity. Application of the aforementioned MPPT methods for centralized/string PV systems is limited as they fail to track the global peak power under partial shading conditions. In addition, multiple peaks occur in the P-V curve under partial shading condition, in which the unique peak point, i.e., the global power peak, should be attained. However, when conventional MPPT techniques are used under such conditions, they usually get trapped in any one of the local power peaks, drastically lowering the search efficiency. Hence, to improve MPP tracking efficiency of conventional methods under PS conditions certain modifications have been proposed in Ref. [31]. Some used a two stage approach to track the MPP [32]. In the first stage, a wide search is performed which ensures that the operating point is moved closer to the global peak, which is further fine-tuned in the second stage to reach the global peak value. Even though tracking efficiency has improved, the method still fails to find the global maximum under all conditions. Another interesting approach is improving the Fibonacci search method for global MPP tracking [33]. Like the two stage method, this one also suffers from the same drawback that it does not guarantee accurate MPP tracking under all shaded conditions [34]. Yet another unique formulation combining the DIRECT search method with P&O was put forward for global MPP tracking in Ref. [35]. Even though it is rendered effective, it is very complex and increases the computational burden. In the recent past, bio-inspired algorithms like GA, PSO and ACO have drawn considerable researcher's attention for MPPT application, since they ensure a sufficient class of accuracy while dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations [32,36–38]. Further, these methods offer various advantages such as computational simplicity, easy implementation and faster response. Among those methods, the PSO method is largely discussed and widely used for solar MPPT due to the fact that it has simple structure, system independency, high adaptability and a lesser number of tuning parameters. Further, in the PSO method, particles are allowed to move in random directions and the best values are evolved based on pbest and gbest values. This exploration process is very suitable for MPPT application. To improve the search efficiency of the conventional PSO method, authors have proposed modifications to the existing algorithm. In Ref. [39], the authors have put forward an additional perception capability for the particles in search space so that best solutions are evolved with higher accuracy than PSO. However, details on implementation under partial shading condition are not discussed. Further, this method is only applicable when the entire module receives uniform insolation, and the partially shaded case cannot be considered. The traditional PSO method is modified in Ref. [40] by introducing equations for velocity update and inertia.
Even though the method showed better performance, use of extra coefficients in the conventional PSO search limits its advantage and increases the computational burden of the algorithm. Another approach", "title": "" }, { "docid": "45d51f472c38e6deea5f039f4aabb852", "text": "Recently, a debate has begun over whether in-class laptops aid or hinder learning. While some research demonstrates that laptops can be an important learning tool, anecdotal evidence suggests more and more faculty are banning laptops from their classrooms because of perceptions that they distract students and detract from learning. The current research examines the nature of in-class laptop use in a large lecture course and how that use is related to student learning. Students completed weekly surveys of attendance, laptop use, and aspects of the classroom environment. Results showed that students who used laptops in class spent considerable time multitasking and that the laptop use posed a significant distraction to both users and fellow students. Most importantly, the level of laptop use was negatively related to several measures of student learning, including self-reported understanding of course material and overall course performance. The practical implications of these findings are discussed. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5f606838b7158075a4b13871c5b6ec89", "text": "The sentence is a standard textual unit in natural language processing applications. In many languages the punctuation mark that indicates the end-of-sentence boundary is ambiguous; thus the tokenizers of most NLP systems must be equipped with special sentence boundary recognition rules for every new text collection. As an alternative, this article presents an efficient, trainable system for sentence boundary disambiguation. The system, called Satz, makes simple estimates of the parts of speech of the tokens immediately preceding and following each punctuation mark, and uses these estimates as input to a machine learning algorithm that then classifies the punctuation mark. Satz is very fast both in training and sentence analysis, and its combined robustness and accuracy surpass existing techniques. The system needs only a small lexicon and training corpus, and has been shown to transfer quickly and easily from English to other languages, as demonstrated on French and German.", "title": "" }, { "docid": "f47ff71a0fb0363c5c27d2579ee1961a", "text": "The advent of 4G LTE has ushered in a growing demand for embedded antennas that can cover a wide range of frequency bands from 698 MHz to 2.69 GHz. A novel active antenna design is presented in this paper that is capable of covering a wide range of LTE bands while being constrained to a 1.8 cm3 volume. The antenna structure utilizes Ethertronics EtherChip 2.0 to add tunability to the antenna structure. The paper details the motivation behind developing the antenna and further discusses the fabrication of the active antenna architecture on an evaluation board and presents the measured results.", "title": "" }, { "docid": "617db9b325e211b45571db6fb8dc6c87", "text": "This paper gives a review of acoustic and ultrasonic optical fiber sensors (OFSs). The review covers optical fiber sensing methods for detecting dynamic strain signals, including general sound and acoustic signals, high-frequency signals, i.e., ultrasonic/ultrasound, and other signals such as acoustic emissions, and impact induced dynamic strain.
Several optical fiber sensing methods are included, in an attempted to summarize the majority of optical fiber sensing methods used to date. The OFS include single fiber sensors and optical fiber devices, fiber-optic interferometers, and fiber Bragg gratings (FBGs). The single fiber and fiber device sensors include optical fiber couplers, microbend sensors, refraction-based sensors, and other extrinsic intensity sensors. The optical fiber interferometers include Michelson, Mach-Zehnder, Fabry-Perot, Sagnac interferometers, as well as polarization and model interference. The specific applications addressed in this review include optical fiber hydrophones, biomedical sensors, and sensors for nondestructive evaluation and structural health monitoring. Future directions are outlined and proposed for acousto-ultrasonic OFS.", "title": "" }, { "docid": "5390c432eec4be91bdc487e3e3043135", "text": "Sociability is considered to be important to the success of social software. The goal of the current study is to identify factors that affect the users’ perception of the sociability of social software and to examine the impact of sociability on the users’ attitude and behavior intentions. In a pilot study, 35 web users were interviewed to gain understanding of how they use social software to supplement their social life and to explore the possible factors that influence the users’ utilization of social software. In the first study, a questionnaire was developed, and 163 valid responses were collected. From the factor analysis results, seven important factors for social software design emerged, which accounts for 63.3% of the total variance. In the second study, 246 participants were asked to evaluate one of ten popular social applications with respect to the seven factors, their perceived sociability, and their attitudes and intention regarding the use of the applications. Results show that sociability is influenced by social climate, benefits and purposes, people, interaction richness, self-presentation, and support for formal interaction. System competency is not a sociability factor, but it significantly influences the user’s experience. Sociability and system competency, when combined, can predict 43% of users’ attitude towards social software and 51% of their intentions to use social software. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "54ef290e7c8fbc5c1bcd459df9bc4a06", "text": "Augmenter of Liver Regeneration (ALR) is a sulfhydryl oxidase carrying out fundamental functions facilitating protein disulfide bond formation. In mammals, it also functions as a hepatotrophic growth factor that specifically stimulates hepatocyte proliferation and promotes liver regeneration after liver damage or partial hepatectomy. Whether ALR also plays a role during vertebrate hepatogenesis is unknown. In this work, we investigated the function of alr in liver organogenesis in zebrafish model. We showed that alr is expressed in liver throughout hepatogenesis. Knockdown of alr through morpholino antisense oligonucleotide (MO) leads to suppression of liver outgrowth while overexpression of alr promotes liver growth. The small-liver phenotype in alr morphants results from a reduction of hepatocyte proliferation without affecting apoptosis. When expressed in cultured cells, zebrafish Alr exists as dimer and is localized in mitochondria as well as cytosol but not in nucleus or secreted outside of the cell. 
Similar to mammalian ALR, zebrafish Alr is a flavin-linked sulfhydryl oxidase and mutation of the conserved cysteine in the CxxC motif abolishes its enzymatic activity. Interestingly, overexpression of either wild type Alr or enzyme-inactive Alr(C131S) mutant promoted liver growth and rescued the liver growth defect of alr morphants. Nevertheless, alr(C131S) is less efficacious in both functions. Meantime, high doses of alr MOs lead to widespread developmental defects and early embryonic death in an alr sequence-dependent manner. These results suggest that alr promotes zebrafish liver outgrowth using mechanisms that are dependent as well as independent of its sulfhydryl oxidase activity. This is the first demonstration of a developmental role of alr in vertebrate. It exemplifies that a low-level sulfhydryl oxidase activity of Alr is essential for embryonic development and cellular survival. The dose-dependent and partial suppression of alr expression through MO-mediated knockdown allows the identification of its late developmental role in vertebrate liver organogenesis.", "title": "" }, { "docid": "63c1747c8803802e9d4cbc7d6231fa1a", "text": "Crowdfunding is an alternative model for project financing, whereby a large and dispersed audience participates through relatively small financial contributions, in exchange for physical, financial or social rewards. It is usually done via Internet-based platforms that act as a bridge between the crowd and the projects. Over the past few years, academics have explored this topic, both empirically and theoretically. However, the mixed findings and array of theories used have come to warrant a critical review of past works. To this end, we perform a systematic review of the literature on crowdfunding and seek to extract (1) the key management theories that have been applied in the context of crowdfunding and how these have been extended, and (2) the principal factors contributing to success for the different crowdfunding models, where success entails both fundraising and timely repayment. In the process, we offer a comprehensive definition of crowdfunding and identify avenues for future research based on the gaps and conflicting results in the literature.", "title": "" }, { "docid": "174e4ef91fa7e2528e0e5a2a9f1e0c7c", "text": "This paper describes the development of a human airbag system which is designed to reduce the impact force from slippage falling-down. A micro inertial measurement unit (muIMU) which is based on MEMS accelerometers and gyro sensors is developed as the motion sensing part of the system. A weightless recognition algorithm is used for real-time falling determination. With the algorithm, the microcontroller integrated with muIMU can discriminate falling-down motion from normal human motions and trigger an airbag system when a fall occurs. Our airbag system is designed to be fast response with moderate input pressure, i.e., the experimental response time is less than 0.3 second under 0.4 MPa (gage pressure). Also, we present our progress on development of the inflator and the airbags", "title": "" }, { "docid": "2d0121e8509d09571d8973da784440a5", "text": "In this paper we examine the suitability of BPMN for business process modelling, using the Workflow Patterns as an evaluation framework. The Workflow Patterns are a collection of patterns developed for assessing control-flow, data and resource capabilities in the area of Process Aware Information Systems (PAIS). 
In doing so, we provide a comprehensive evaluation of the capabilities of BPMN, and its strengths and weaknesses when utilised for business process modelling. The analysis provided for BPMN is part of a larger effort aiming at an unbiased and vendor-independent survey of the suitability and the expressive power of some mainstream process modelling languages. It is a sequel to an analysis series where languages like BPEL and UML 2.0 A.D are evaluated.", "title": "" }, { "docid": "6da08fd9cff229ffaabb7393035b815c", "text": "In medical decision making (classification, diagnosing, etc.) there are many situations where decision must be made effectively and reliably. Conceptual simple decision making models with the possibility of automatic learning are the most appropriate for performing such tasks. Decision trees are a reliable and effective decision making technique that provide high classification accuracy with a simple representation of gathered knowledge and they have been used in different areas of medical decision making. In the paper we present the basic characteristics of decision trees and the successful alternatives to the traditional induction approach with the emphasis on existing and possible future applications in medicine.", "title": "" }, { "docid": "45885c7c86a05d2ba3979b689f7ce5c8", "text": "Existing Markov Chain Monte Carlo (MCMC) methods are either based on generalpurpose and domain-agnostic schemes, which can lead to slow convergence, or problem-specific proposals hand-crafted by an expert. In this paper, we propose ANICE-MC, a novel method to automatically design efficient Markov chain kernels tailored for a specific domain. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution. Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC. Using a bootstrap approach, we show how to train efficient Markov chains to sample from a prescribed posterior distribution by iteratively improving the quality of both the model and the samples. Empirical results demonstrate that A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of deep neural networks, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo.", "title": "" }, { "docid": "3800853b95bad046a25f76ede85ba51c", "text": "Tendon driven mechanisms have been considered in robotic design for several decades. They provide lightweight end effectors with high dynamics. Using remote actuators it is possible to free more space for mechanics or electronics. Nevertheless, lightweight mechanism are fragile and unfortunately their control software can not protect them during the very first instant of an impact. Compliant mechanisms address this issue, providing a mechanical low pass filter, increasing the time available before the controller reacts. Using adjustable stiffness elements and an antagonistic architecture, the joint stiffness can be adjusted by variation of the tendon pre-tension. In this paper, the fundamental equations of m antagonistic tendon driven mechanisms are reviewed. Due to limited tendon forces the maximum torque and the maximum acheivable stiffness are dependent. This implies, that not only the torque workspace, or the stiffness workspace must be considered but also their interactions. Since the results are of high dimensionality, quality measures are necessary to provide a synthetic view. 
Two quality measures, similar to those used in grasp planning, are presented. They both provide the designer with a more precise insight into the mechanism.", "title": "" }, { "docid": "65a4ec1b13d740ae38f7b896edb2eaff", "text": "The problem of evolutionary network analysis has gained increasing attention in recent years, because of an increasing number of networks, which are encountered in temporal settings. For example, social networks, communication networks, and information networks continuously evolve over time, and it is desirable to learn interesting trends about how the network structure evolves over time, and in terms of other interesting trends. One challenging aspect of networks is that they are inherently resistant to parametric modeling, which allows us to truly express the edges in the network as functions of time. This is because, unlike multidimensional data, the edges in the network reflect interactions among nodes, and it is difficult to independently model the edge as a function of time, without taking into account its correlations and interactions with neighboring edges. Fortunately, we show that it is indeed possible to achieve this goal with the use of a matrix factorization, in which the entries are parameterized by time. This approach allows us to represent the edge structure of the network purely as a function of time, and predict the evolution of the network over time. This opens the possibility of using the approach for a wide variety of temporal network analysis problems, such as predicting future trends in structures, predicting links, and node-centric anomaly/event detection. This flexibility is because of the general way in which the approach allows us to express the structure of the network as a function of time. We present a number of experimental results on a number of temporal data sets showing the effectiveness of the approach.", "title": "" }, { "docid": "4dedc4c5a6a92e2f2fe8bc8a7476d187", "text": "Facial recognition systems are commonly used for verification and security purposes but the levels of accuracy are still being improved. Errors occurring in facial feature detection due to occlusions, pose and illumination changes can be compensated by the use of hog descriptors. The most reliable way to measure a face is by employing deep learning techniques. The final step is to train a classifier that can take in the measurements from a new test image and tells which known person is the closest match. A python based application is being developed to recognize faces in all conditions.", "title": "" } ]
scidocsrr
9085f828419e37364fa63b7c0110c498
Faceshop: deep sketch-based face image editing
[ { "docid": "6008f42e840e85c935bc455e13e03e19", "text": "Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. Using an automated algorithm is an appealing alternative to manual work, but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics. Further, these adjustments are often spatially varying. Existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep learning has shown unique abilities to address hard problems. This motivated us to explore the use of deep neural networks (DNNs) in the context of photo editing. In this article, we formulate automatic photo adjustment in a manner suitable for this approach. We also introduce an image descriptor accounting for the local semantics of an image. Our experiments demonstrate that training DNNs using these descriptors successfully capture sophisticated photographic styles. In particular and unlike previous techniques, it can model local adjustments that depend on image semantics. We show that this yields results that are qualitatively and quantitatively better than previous work.", "title": "" }, { "docid": "2a89fb135d7c53bda9b1e3b8598663a5", "text": "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.", "title": "" } ]
[ { "docid": "8b3f597acb5a5a1333176a13e7dbbe43", "text": "Generalization bounds for time series prediction and other non-i.i.d. learning scenarios that can be found in the machine learning and statistics literature assume that observations come from a (strictly) stationary distribution. The first bounds for completely non-stationary setting were proved in [6]. In this work we present an extension of these results and derive novel algorithms for forecasting nonstationary time series. Our experimental results show that our algorithms significantly outperform standard autoregressive models commonly used in practice.", "title": "" }, { "docid": "9a04006d0328b838b9360a381401e436", "text": "In this paper, a novel approach for two-loop control of the DC-DC flyback converter in discontinuous conduction mode is presented by using sliding mode controller. The proposed controller can regulate output of the converter in wide range of input voltage and load resistance. In order to verify accuracy and efficiency of the developed sliding mode controller, proposed method is simulated in MATLAB/Simulink. It is shown that the developed controller has faster dynamic response compared with standard integrated circuit (MIC38C42-5) based regulators.", "title": "" }, { "docid": "0c5143b222e1a8956dfb058b222ddc28", "text": "Partially observed control problems are a challenging aspect of reinforcement learning. We extend two related, model-free algorithms for continuous control – deterministic policy gradient and stochastic value gradient – to solve partially observed domains using recurrent neural networks trained with backpropagation through time. We demonstrate that this approach, coupled with long-short term memory is able to solve a variety of physical control problems exhibiting an assortment of memory requirements. These include the short-term integration of information from noisy sensors and the identification of system parameters, as well as long-term memory problems that require preserving information over many time steps. We also demonstrate success on a combined exploration and memory problem in the form of a simplified version of the well-known Morris water maze task. Finally, we show that our approach can deal with high-dimensional observations by learning directly from pixels. We find that recurrent deterministic and stochastic policies are able to learn similarly good solutions to these tasks, including the water maze where the agent must learn effective search strategies.", "title": "" }, { "docid": "b3160bf7e40ab6cee122894af276cead", "text": "This article describes existing and expected benefits of the SP theory of intelligence, and some potential applications. The theory aims to simplify and integrate ideas across artificial intelligence, mainstream computing, and human perception and cognition, with information compression as a unifying theme. It combines conceptual simplicity with descriptive and explanatory power across several areas of computing and cognition. In the SP machine—an expression of the SP theory which is currently realized in the form of a computer model—there is potential for an overall simplification of computing systems, including software. The SP theory promises deeper insights and better solutions in several areas of application including, most notably, unsupervised learning, natural language processing, autonomous robots, computer vision, intelligent databases, software engineering, information compression, medical diagnosis and big data. 
There is also potential in areas such as the semantic web, bioinformatics, structuring of documents, the detection of computer viruses, data fusion, new kinds of computer, and the development of scientific theories. The theory promises seamless integration of structures and functions within and between different areas of application. The potential value, worldwide, of these benefits and applications is at least $190 billion each year. Further development would be facilitated by the creation of a high-parallel, open-source version of the SP machine, available to researchers everywhere.", "title": "" }, { "docid": "8a9603a10e5e02f6edfbd965ee11bbb9", "text": "The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds such that birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.", "title": "" }, { "docid": "1f914adf3655eda3abc9f6c7a987cfcc", "text": "Purpose – The paper aims to examine ways to reduce privacy risk and its effects so that adoption of e-services can be enhanced. Design/methodology/approach – Consumers that form a viable target market for an e-service are presented with the task of experiencing the e-service and expressing their attitudes and intentions toward it. Structural equation modeling is used to analyze the responses. Findings – The paper finds that consumer beliefs that the e-service will be easy to use and that the e-service provider is credible and capable reduce privacy risk and its effects, thus enhancing adoption likelihood. Research limitations/implications – The focus on a financial services product (online bill paying) suggests that similar research should be conducted with other high-risk e-services (such as those dealing with healthcare) and lower-risk e-services (such as subscription services and social networks). Practical implications – In addition to addressing consumers’ privacy risk directly, e-service providers can also reduce privacy risk and its effects by enhancing corporate credibility and perceived ease of use of the service. Increased assessments of privacy risk perceptions and efforts to reduce those perceptions will likely yield higher usage rates for e-services. Originality/value – The use of the technology acceptance model from information systems research, combined with a multi-faceted conceptualization of privacy risk, moves the examination of privacy risk to a higher level, particularly in light of the examination of the additional factors of perceived ease of use and corporate credibility.", "title": "" }, { "docid": "252f5488232f7437ff886b79e2e7014e", "text": "Typical video footage captured using an off-the-shelf camcorder suffers from limited dynamic range. 
This paper describes our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame. Our approach consists of three parts: automatic exposure control during capture, HDR stitching across neighboring frames, and tonemapping for viewing. HDR stitching requires accurately registering neighboring frames and choosing appropriate pixels for computing the radiance map. We show examples for a variety of dynamic scenes. We also show how we can compensate for scene and camera movement when creating an HDR still from a series of bracketed still photographs.", "title": "" }, { "docid": "39030e91e22d222bf5f5e0eabbe02a38", "text": "Serratia marcescens has been recognized as an important cause of nosocomial and community-acquired infections. To our knowledge, we describe the first case of S. marcescens rhabdomyolysis, most probably related to acute cholecystitis and secondary bacteremia. The condition was successfully managed with levofloxacin. Keeping in mind the relevant morbidity and mortality associated with bacterial rhabdomyolysis, physicians should consider this possibility in patients with suspected or proven bacterial disease. We suggest S. marcescens should be regarded as a new causative agent of infectious rhabdomyolysis.", "title": "" }, { "docid": "b7062e40643ff1b879247a3f4ec3b07f", "text": "The question of whether there are different patterns of autonomic nervous system responses for different emotions is examined. Relevant conceptual issues concerning both the nature of emotion and the structure of the autonomic nervous system are discussed in the context of the development of research methods appropriate for studying this question. Are different emotional states associated with distinct patterns of autonomic nervous system (ANS) activity? This is an old question that is currently enjoying a modest revival in psychology. In the 1950s autonomic specificity was a key item on the agenda of the newly emerging discipline of psychophysiology, which saw as its mission the scientific exploration of the mind-body relationship using the tools of electrophysiological measurement. But the field of psychophysiology had the misfortune of coming of age during a period in which psychology drifted away from its physiological roots, a period in which psychology was dominated by learning, behaviourism, personality theory and later by cognition. Psychophysiology in the period between 1960 and 1980 reflected these broader trends in psychology by focusing on such issues as autonomic markers of perceptual states (e.g. orienting, stimulus processing), the interplay between personality factors and ANS responsivity, operant conditioning of autonomic functions, and finally, electrophysiological markers of cognitive states. Research on autonomic specificity in emotion became increasingly rare. Perhaps as a result of these historical trends in psychology, or perhaps because research on emotion and physiology is so difficult to do well, there exists only a small body of studies on ANS specificity. Although almost all of these studies report some evidence for the existence of specificity, the prevailing zeitgeist has been that specificity has not been empirically established. At this point in time a review of the existing literature would not be very informative, for it would inevitably dissolve into a critique of methods. 
Instead, what I hope to accomplish in this chapter is to provide a new framework for thinking about ANS specificity, and to propose guidelines for carrying out research on this issue that will be cognizant of the recent methodological and theoretical advances that have been made both in psychophysiology and in research on emotion. Emotion as organization From the outset, the definition of emotion that underlies this chapter should be made explicit. For me the essential function of emotion is organization. The selection of emotion for preservation across time and species is based on the need for an efficient mechanism that can mobilize and organize disparate response systems to deal with environmental events that pose a threat to survival. In this view the prototypical context for human emotions is those situations in which a multi-system response must be organized quickly, where time is not available for the lengthy processes of deliberation, reformulation, planning and rehearsal; where a fine degree of co-ordination is required among systems as disparate as the muscles of the face and the organs of the viscera; and where adaptive behaviours that normally reside near the bottom of behavioural hierarchies must be instantaneously shifted to the top. Specificity versus undifferentiated arousal In this model of emotion as organization it is assumed that each component system is capable of a number of different responses, and that the emotion will guide the selection of responses from each system. Component systems differ in terms of the number of response possibilities. Thus, in the facial expressive system a selection must be made among a limited set of prototypic emotional expressions (which are but a subset of the enormous number of expressions the face is capable of assuming). A motor behaviour must also be selected from a similarly reduced set of responses consisting of fighting, fleeing, freezing, hiding, etc. All major theories of emotion would accept the proposition that activation of the ANS is one of the changes that occur during emotion. But theories differ as to how many different ANS patterns constitute the set of selection possibilities. At one extreme are those who would argue that there are only two ANS patterns: 'off' and 'on'. The 'on' ANS pattern, according to this view, consists of a high-level, global, diffuse ANS activation, mediated primarily by the sympathetic branch of the ANS. The manifestations of this pattern (rapid and forceful contractions of the heart, rapid and deep breathing, increased systolic blood pressure, sweating, dry mouth, redirection of blood flow to large skeletal muscles, peripheral vasoconstriction, release of large amounts of epinephrine and norepinephrine from the adrenal medulla, and the resultant release of glucose from the liver) are well known. Cannon (1927) described this pattern in some detail, arguing that this kind of high-intensity, undifferentiated arousal accompanied all emotions. Among contemporary theories the notion of undifferentiated arousal is most clearly found in Mandler's theory (Mandler, 1975). However, undifferentiated arousal also played a major role in the extraordinarily influential cognitive/physiological theory of Schachter and Singer (1962). According to this theory, undifferentiated arousal is a necessary precondition for emotion: an extremely plastic medium to be moulded by cognitive processes working in concert with the available cues from the social environment. 
At the other extreme are those who argue that there are a large number of patterns of ANS activation, each associated with a different emotion (or subset of emotions). This is the traditional specificity position. Its classic statement is often attributed to James (1884), although Alexander (1950) provided an even more radical version. The specificity position fuelled a number of experimental studies in the 1950s and 1960s, all attempting to identify some of these autonomic patterns (e.g. Averill, 1969; Ax, 1953; Funkenstein, King and Drolette, 1954; Schachter, 1957; Sternbach, 1962). Despite these studies, all of which reported support for ANS specificity, the undifferentiated arousal theory, especially as formulated by Schachter and Singer (1962) and their followers, has been dominant for a great many years. Is the ANS capable of specific action? No matter how appealing the notion of ANS specificity might be in the abstract, there would be little reason to pursue it in the laboratory if the ANS were only capable of producing one pattern of arousal. There is no question that the pattern of high-level sympathetic arousal described earlier is one pattern that the ANS can produce. Cannon's arguments notwithstanding, I believe there now is quite ample evidence that the ANS is capable of a number of different patterns of activation. Whether these patterns are reliably associated with different emotions remains an empirical question, but the potential is surely there. A case in support of this potential for specificity can be based on: (a) the neural structure of the ANS; (b) the stimulation neurochemistry of the ANS; and (c) empirical findings.", "title": "" }, { "docid": "619af7dc39e21690c1d164772711d7ed", "text": "The prevalence of smart mobile devices has promoted the popularity of mobile applications (a.k.a. apps). Supporting mobility has become a promising trend in software engineering research. This article presents an empirical study of behavioral service profiles collected from millions of users whose devices are deployed with Wandoujia, a leading Android app-store service in China. The dataset of Wandoujia service profiles consists of two kinds of user behavioral data from using 0.28 million free Android apps, including (1) app management activities (i.e., downloading, updating, and uninstalling apps) from over 17 million unique users and (2) app network usage from over 6 million unique users. We explore multiple aspects of such behavioral data and present patterns of app usage. Based on the findings as well as derived knowledge, we also suggest some new open opportunities and challenges that can be explored by the research community, including app development, deployment, delivery, revenue, etc.", "title": "" }, { "docid": "e281a8dc16b10dff80fad36d149a8a2f", "text": "We present a tree router for multichip systems that guarantees deadlock-free multicast packet routing without dropping packets or restricting their length. Multicast routing is required to efficiently connect massively parallel systems' computational units when each unit is connected to thousands of others residing on multiple chips, which is the case in neuromorphic systems. Our tree router implements this one-to-many routing by branching recursively: broadcasting the packet within a specified subtree. Within this subtree, the packet is only accepted by chips that have been programmed to do so. 
This approach boosts throughput because memory look-ups are avoided en route, and keeps the header compact because it only specifies the route to the subtree's root. Deadlock is avoided by routing in two phases, an upward phase and a downward phase, and by restricting branching to the downward phase. This design is the first fully implemented wormhole router with packet-branching that can never deadlock. The design's effectiveness is demonstrated in Neurogrid, a million-neuron neuromorphic system consisting of sixteen chips. Each chip has a 256 × 256 silicon-neuron array integrated with a full-custom asynchronous VLSI implementation of the router that delivers up to 1.17 G words/s across the sixteen-chip network with less than 1 μs jitter.", "title": "" }, { "docid": "dbc8564d588199436686bf234514a20f", "text": "1. MOTIVATION AND SUMMARY Traditional Database Management Systems (DBMS) software is built on the concept of persistent data sets, that are stored reliably in stable storage and queried/updated several times throughout their lifetime. For several emerging application domains, however, data arrives and needs to be processed on a continuous basis, without the benefit of several passes over a static, persistent data image. Such continuous data streams arise naturally, for example, in the network installations of large Telecom and Internet service providers where detailed usage information (Call-Detail-Records (CDRs), SNMP/RMON packet-flow data, etc.) from different parts of the underlying network needs to be continuously collected and analyzed for interesting trends. Other applications that generate rapid, continuous and large volumes of stream data include transactions in retail chains, ATM and credit card operations in banks, financial tickers, Web server log records, etc. In most such applications, the data stream is actually accumulated and archived in the DBMS of a (perhaps, off-site) data warehouse, often making access to the archived data prohibitively expensive. Further, the ability to make decisions and infer interesting patterns on-line (i.e., as the data stream arrives) is crucial for several mission-critical tasks that can have significant dollar value for a large corporation (e.g., telecom fraud detection). As a result, recent years have witnessed an increasing interest in designing data-processing algorithms that work over continuous data streams, i.e., algorithms that provide results to user queries while looking at the relevant data items only once and in a fixed order (determined by the stream-arrival pattern). Two key parameters for query processing over continuous data streams are (1) the amount of memory made available to the online algorithm, and (2) the per-item processing time required by the query processor. The former constitutes an important constraint on the design of stream processing algorithms, since in a typical streaming environment, only limited memory resources are available to the query-processing algorithms. In these situations, we need algorithms that can summarize the data stream(s) involved in a concise, but reasonably accurate, synopsis that can be stored in the allotted (small) amount of memory and can be used to provide approximate answers to user queries along with some reasonable guarantees on the quality of the approximation. Such approx-", "title": "" }, { "docid": "b34c63c9e58150fd0057b3fde59eff31", "text": "In this paper, we identify a new form of attack, called the Balance attack, against proof-of-work blockchain systems. 
The novelty of this attack consists of delaying network communications between multiple subgroups of nodes with balanced mining power. Our theoretical analysis captures the precise tradeoff between the network delay and the mining power of the attacker needed to double spend in Ethereum with high probability. We quantify our probabilistic analysis with statistics taken from the R3 consortium, and show that a single machine needs 20 minutes to attack the consortium. Finally, we run an Ethereum private chain in a distributed system with similar settings as R3 to demonstrate the feasibility of the approach, and discuss the application of the Balance attack to Bitcoin. Our results clearly confirm that main proof-of-work blockchain protocols can be badly suited for consortium blockchains.", "title": "" }, { "docid": "6973231128048ac2ca5bce0121bf6d95", "text": "PURPOSE\nThe aim of this study is to analyse the grip force distribution for different prosthetic hand designs and the human hand fulfilling a functional task.\n\n\nMETHOD\nA cylindrical object is held with a power grasp and the contact forces are measured at 20 defined positions. The distributions of contact forces in standard electric prostheses, in a experimental prosthesis with an adaptive grasp, and in human hands as a reference are analysed and compared. Additionally, the joint torques are calculated and compared.\n\n\nRESULTS\nContact forces of up to 24.7 N are applied by the middle and distal phalanges of the index finger, middle finger, and thumb of standard prosthetic hands, whereas forces of up to 3.8 N are measured for human hands. The maximum contact forces measured in a prosthetic hand with an adaptive grasp are 4.7 N. The joint torques of human hands and the adaptive prosthesis are comparable.\n\n\nCONCLUSIONS\nThe analysis of grip force distribution is proposed as an additional parameter to rate the performance of different prosthetic hand designs.", "title": "" }, { "docid": "9dacfccbbaa75947e4f4c09f6d54ed9e", "text": "In New Light commercial vehicle development, Engine is mounted at rear to have low Engine Noise and Vibration inside cabin. At the same time there is a need of high load carrying rear suspension to suit market requirement. In this paper complete design of leaf spring rear suspension for rear engine is discussed.This is nontraditional type of suspension with leaf spring application for rear engine vehicle. Traditionally, for light commercial vehicles, Engine is placed at front/middle giving huge space for traditional rear axle with differential inside. Design of rear suspension is verified and validated successfully for durability and handling by doing finite element analysis and testing.", "title": "" }, { "docid": "5f6b9395a3cd7af42c4822e2cf7eda7c", "text": "Unilateral below-knee amputees develop abnormal gait characteristics that include bilateral asymmetries and an elevated metabolic cost relative to non-amputees. In addition, long-term prosthesis use has been linked to an increased prevalence of joint pain and osteoarthritis in the intact leg knee. To improve amputee mobility, prosthetic feet that utilize elastic energy storage and return (ESAR) have been designed, which perform important biomechanical functions such as providing body support and forward propulsion. 
However, the prescription of appropriate design characteristics (e.g., stiffness) is not well-defined since its influence on foot function and important in vivo biomechanical quantities such as metabolic cost and joint loading remain unclear. The design of feet that improve these quantities could provide considerable advancements in amputee care. Therefore, the purpose of this study was to couple design optimization with dynamic simulations of amputee walking to identify the optimal foot stiffness that minimizes metabolic cost and intact knee joint loading. A musculoskeletal model and distributed stiffness ESAR prosthetic foot model were developed to generate muscle-actuated forward dynamics simulations of amputee walking. Dynamic optimization was used to solve for the optimal muscle excitation patterns and foot stiffness profile that produced simulations that tracked experimental amputee walking data while minimizing metabolic cost and intact leg internal knee contact forces. Muscle and foot function were evaluated by calculating their contributions to the important walking subtasks of body support, forward propulsion and leg swing. The analyses showed that altering a nominal prosthetic foot stiffness distribution by stiffening the toe and mid-foot while making the ankle and heel less stiff improved ESAR foot performance by offloading the intact knee during early to mid-stance of the intact leg and reducing metabolic cost. The optimal design also provided moderate braking and body support during the first half of residual leg stance, while increasing the prosthesis contributions to forward propulsion and body support during the second half of residual leg stance. Future work will be directed at experimentally validating these results, which have important implications for future designs of prosthetic feet that could significantly improve amputee care.", "title": "" }, { "docid": "de02785ae88c115e0c7077b79da5ab1c", "text": "This paper introduces a neural network to solve the structure-from-motion (SfM) problem via feature bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature reprojection error. The whole pipeline is differentiable, so that the network can learn suitable feature representations that make the BA problem more tractable. Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth. The network first generates some bases depth maps according to the input image, and optimizes the final depth as a linear combination of these bases via feature BA. The bases depth map generator is also learned via end-to-end training. The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and machine learning (i.e. feature learning and basis depth map generator learning) to address the challenging SfM problem. Experiments on large scale real data prove the success of the proposed method.", "title": "" }, { "docid": "8e6ba93f41c4e59fe937b1d48dfb0f74", "text": "This paper aims at studying the impact of the colors of e-commerce websites on consumer memorization and buying intention. Based on a literature review we wish to introduce the theoretical and methodological bases addressing this issue. A conceptual model is proposed, showing the effects of the color of the e-commerce website and of its components Hue, Brightness and Saturation, on the behavioral responses of the consumer memorization and buying intention. These responses are conveyed by mood. 
Data collection was carried out during a laboratory experiment in order to control for the measurement of the colored appearance of e-commerce websites. Participants visited one of the 8 versions of a website designed for the research, selling music CDs. Data analysis using ANOVA, regressions and general linear models (GLM), show a significant effect of color on memorization, conveyed by mood. The interaction of hue and brightness, using chromatic colors for the dominant (background) and dynamic (foreground) supports memorization and buying intention, when contrast is based on low brightness. A negative mood infers better memorization but a decreasing buying intention. The managerial, methodological and theoretical implications, as well as the future ways of research were put in prospect.", "title": "" }, { "docid": "87dd019430e4345026b8de22f696c6e2", "text": "Although consumer research began focusing on emotional response to advertising during the 1980s (Goodstein, Edell, and Chapman Moore. 1990; Burke and Edell, 1989; Aaker, Stayman, and Vezina, 1988; Holbrook and Batra, 1988), perhaps one of the most practical measures of affective response has only recently emerged. Part of the difficulty in developing measures of emotional response stems from the complexity of emotion itself (Plummer and Leckenby, 1985). Researchers have explored several different measurement formats including: verbal self-reports (adjective checklists), physiological techniques, photodecks, and dial-turning instruments.", "title": "" }, { "docid": "5f35ed926a267dc9f80d110e87c06e5a", "text": "Face detection is one of the most studied topics in computer vision literature, not only because of the challenging nature of face as an object, but also due to the countless applications that require the application of face detection as a first step. During the past 15 years, tremendous progress has been made due to the availability of data in unconstrained capture conditions (so-called ’in-thewild’) through the Internet, the effort made by the community to develop publicly available benchmarks, as well as the progress in the development of robust computer vision algorithms. In this paper, we survey the recent advances in real-world face detection techniques, beginning with the seminal Viola-Jones face detector methodology. These techniques are roughly categorized into two general schemes: rigid templates, learned mainly via boosting based methods or by the application of deep neural networks, and deformable models that describe the face by its parts. Representative methods will be described in detail, along with a few additional successful methods that we briefly go through at the end. Finally, we survey the main databases used for the evaluation of face detection algorithms and recent benchmarking efforts, and discuss the future of face detection. c © 2014 Published by Elsevier Ltd.", "title": "" } ]
scidocsrr
1d694a9d287a8d6b7632ac2432cbe568
The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies
[ { "docid": "94aeb6dad00f174f89b709feab3db21f", "text": "We present a novel approach to the automatic acquisition of taxonomies or concept hierarchies from a text corpus. The approach is based on Formal Concept Analysis (FCA), a method mainly used for the analysis of data, i.e. for investigating and processing explicitly given information. We follow Harris’ distributional hypothesis and model the context of a certain term as a vector representing syntactic dependencies which are automatically acquired from the text corpus with a linguistic parser. On the basis of this context information, FCA produces a lattice that we convert into a special kind of partial order constituting a concept hierarchy. The approach is evaluated by comparing the resulting concept hierarchies with hand-crafted taxonomies for two domains: tourism and finance. We also directly compare our approach with hierarchical agglomerative clustering as well as with Bi-Section-KMeans as an instance of a divisive clustering algorithm. Furthermore, we investigate the impact of using different measures weighting the contribution of each attribute as well as of applying a particular smoothing technique to cope with data sparseness.", "title": "" }, { "docid": "d87abfd50876da09bce301831f71605f", "text": "Recent advances in topic models have explored complicated structured distributions to represent topic correlation. For example, the pachinko allocation model (PAM) captures arbitrary, nested, and possibly sparse correlations between topics using a directed acyclic graph (DAG). While PAM provides more flexibility and greater expressive power than previous models like latent Dirichlet allocation (LDA), it is also more difficult to determine the appropriate topic structure for a specific dataset. In this paper, we propose a nonparametric Bayesian prior for PAM based on a variant of the hierarchical Dirichlet process (HDP). Although the HDP can capture topic correlations defined by nested data structure, it does not automatically discover such correlations from unstructured data. By assuming an HDP-based prior for PAM, we are able to learn both the number of topics and how the topics are correlated. We evaluate our model on synthetic and real-world text datasets, and show that nonparametric PAM achieves performance matching the best of PAM without manually tuning the number of topics.", "title": "" } ]
[ { "docid": "0b40b90d13a02d9c867485529f91e05e", "text": "Approximation models (also known as metamodels) have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the sampling strategies used. Our goal in this paper is to investigate the general applicability of sequential sampling for creating global metamodels. Various sequential sampling approaches are reviewed and new approaches are proposed. The performances of these approaches are investigated against that of the onestage approach using a set of test problems with a variety of features. The potential usages of sequential sampling strategies are also discussed. NOMENCLATURE d Distance between two sample points ds Scaled distance between two sample points k Number of input variables n Number of sample points l Number of sample points generated at all the previous sampling stages m Number of sample points generated at the new sampling stage XD A sample set with n sample points Dn D D x x x ,..., , 2 1 XP A sample set with all l previous sample points Pl P P x x x ,..., , 2 1 XC A sample set with m new sample points Cm C C x x x ,..., , 2 1 R Correlation matrix INTRODUCTION Mathematical models have been widely used to simulate and analyze complex real world systems in the area of engineering design. These mathematical models, often implemented by computer codes (e.g., Computational Fluid Dynamics and Finite Element Analysis), could be computationally expensive. For example, one run of a finite element model for vehicle crashworthiness can take several hours. While the capacity of computer keeps increasing, to capture the real world systems more accurately, today’s simulation codes are even getting much more complex and unavoidably more expensive. The multidisciplinary nature of design and the need for incorporating uncertainty in design optimization have posed additional challenges. A widely used strategy is to utilize approximation models which are often referred to as metamodels as they provide a model of the model [1], replacing the expensive simulation model during the process. Recent studies on using metamodels in design applications include [2, 3, 4, 5, 6], etc. For dealing with multidisciplinary systems, Meckesheimer, et al. [7] presented a generic integration framework to integrate metamodels from multiple subsystems. An important research issue related to metamodeling is how to achieve a good accuracy of a metamodel with a reasonable number of sample points. While the accuracy of a metamodel is directly related to the metamodeling technique used and the properties of a problem itself, the types of sampling approaches also have direct influences on the performance of a metamodel. Koehler and Owen [8] provided a good review on various sampling approaches for computer experiments. Simpson, et al. [9] compared five sampling strategies and four metamodeling approaches in terms of their", "title": "" }, { "docid": "b2e71f9d11f29980ba1ac47fabc8b423", "text": "As security incidents continue to impact organisations, there is a growing demand for systems to be ‘forensic-ready’ - to maximise the potential use of evidence whilst minimising the costs of an investigation. Researchers have supported organisational forensic readiness efforts by proposing the use of policies and processes, aligning systems with forensics objectives and training employees. 
However, recent work has also proposed an alternative strategy for implementing forensic readiness called forensic-by-design. This is an approach that involves integrating requirements for forensics into relevant phases of the systems development lifecycle with the aim of engineering forensic-ready systems. While this alternative forensic readiness strategy has been discussed in the literature, no previous research has examined the extent to which organisations actually use this approach for implementing forensic readiness. Hence, we investigate the extent to which organisations consider requirements for forensics during systems development. We first assessed existing research to identify the various perspectives of implementing forensic readiness, and then undertook an online survey to investigate the consideration of requirements for forensics during systems development lifecycles. Our findings provide an initial assessment of the extent to which requirements for forensics are considered within organisations. We then use our findings, coupled with the literature, to identify a number of research challenges regarding the engineering of forensic-ready systems.", "title": "" }, { "docid": "f4b5d0cda325f8c896d0120122a9fa40", "text": "The aim of the present study was to evaluate the effectiveness of low-budget virtual reality (VR) exposure versus exposure in vivo in a between-group design in 33 patients suffering from acrophobia. The virtual environments used in treatment were exactly copied from the real environments used in the exposure in vivo program. VR exposure was found to be as effective as exposure in vivo on anxiety and avoidance as measured with the Acrophobia Questionnaire (AQ), the Attitude Towards Heights Questionnaire (ATHQ) and the Behavioral Avoidance Test (BAT). Results were maintained up to six months follow-up. The present study shows that VR exposure can be effective with relatively cheap hardware and software on stand-alone computers currently on the market. Further studies into the effectiveness of VR exposure are recommended in other clinical groups as agoraphobics and social phobics and studies in which VR exposure is compared with more emerging virtual worlds as presented in CAVE-type systems.", "title": "" }, { "docid": "d4fff9c75f3e8e699bbf5815b81e77b0", "text": "We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness on uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitutes a crucial challenge to deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. 
Our new dataset consisting of 83K carefully measured human psychophysical trials provide a useful reference for lifelong robustness against image degradations set by the human visual system.", "title": "" }, { "docid": "40525527409abf3702690ed2eb51b200", "text": "Remote storage delivers a cost effective solution for data storage. If data is of a sensitive nature, it should be encrypted prior to outsourcing to ensure confidentiality; however, searching then becomes challenging. Searchable encryption is a well-studied solution to this problem. Many schemes only consider the scenario where users can search over the entirety of the encrypted data. In practice, sensitive data is likely to be classified according to an access control policy and different users should have different access rights. It is unlikely that all users have unrestricted access to the entire data set. Current schemes that consider multi-level access to searchable encryption are predominantly based on asymmetric primitives. We investigate symmetric solutions to multi-level access in searchable encryption where users have different access privileges to portions of the encrypted data and are not permitted to search over, or learn information about, data for which they are not authorised.", "title": "" }, { "docid": "ce17d4ecfe780d5dcc4e2910063c87f5", "text": "Article history: Transgender people face ma Received 14 December 2007 Received in revised form 31 December 2008 Accepted 20 January 2009 Available online 24 January 2009", "title": "" }, { "docid": "99ae7b63fafbfd8b2127ceed5542ac7f", "text": "Most ultrawideband (UWB) location systems already proposed for position estimation have only been individually evaluated for particular scenarios. For a fair performance comparison among different solutions, a common evaluation scenario would be desirable. In this paper, we compare three commercially available UWB systems (Ubisense, BeSpoon, and DecaWave) under the same experimental conditions, in order to do a critical performance analysis. We include the characterization of the quality of the estimated tag-to-sensor distances in an indoor industrial environment. This testing space includes areas under line-of-sight (LOS) and diverse non-LOS conditions caused by the reflection, propagation, and the diffraction of the UWB radio signals across different obstacles. The study also includes the analysis of the estimated azimuth and elevation angles for the Ubisense system, which is the only one that incorporates this feature using an array antenna at each sensor. Finally, we analyze the 3-D positioning estimation performance of the three UWB systems using a Bayesian filter implemented with a particle filter and a measurement model that takes into account bad range measurements and outliers. A final conclusion is drawn about which system performs better under these industrial conditions.", "title": "" }, { "docid": "ed509de8786ee7b4ba0febf32d0c87f7", "text": "Threat detection and analysis are indispensable processes in today's cyberspace, but current state of the art threat detection is still limited to specific aspects of modern malicious activities due to the lack of information to analyze. By measuring and collecting various types of data, from traffic information to human behavior, at different vantage points for a long duration, the viewpoint seems to be helpful to deeply inspect threats, but faces scalability issues as the amount of collected data grows, since more computational resources are required for the analysis. 
In this paper, we report our experience from operating the Hadoop platform, called MATATABI, for threat detections, and present the micro-benchmarks with four different backends of data processing in typical use cases such as log data and packet trace analysis. The benchmarks demonstrate the advantages of distributed computation in terms of performance. Our extensive use cases of analysis modules showcase the potential benefit of deploying our threat analysis platform.", "title": "" }, { "docid": "7bef5a19f6d8f71d4aa12194dd02d0c4", "text": "To build a natural sounding speech synthesis system, it is essential that the text processing component produce an appropriate sequence of phonemic units corresponding to an arbitrary input text. In this paper we discuss our efforts in addressing the issues of Font-to-Akshara mapping, pronunciation rules for Aksharas, text normalization in the context of building text-to-speech systems in Indian languages.", "title": "" }, { "docid": "45c9e0fc480ffc569720b04e789b0dfd", "text": "Background. Availability of large amount of clinical data is opening up new research avenues in a number of fields. An exciting field in this respect is healthcare, where secondary use of healthcare data is beginning to revolutionize healthcare. Except for availability of Big Data, both medical data from healthcare institutions (such as EMR data) and data generated from health and wellbeing devices (such as personal trackers), a significant contribution to this trend is also being made by recent advances on machine learning, specifically deep learning algorithms. Objectives. The objective of this work was to provide an overview of how automatic processing of Electronic Medical Records (EMR) data using Deep Learning techniques is contributing to understating of evolution of chronic diseases and prediction of risk of developing these diseases and associated complications. Methods. A review of the scientific literature was conducted using scientific databases Google Scholar, PubMed, IEEE, and ACM. Searches were focused on publications containing terms related to both Electronic Medical Records and Deep Learning and their synonyms. Results. The review has shown that a number of studies have reported results that provide unprecedented insights into chronic diseases through the use of deep learning methods to analyze EMR data. However, a major roadblock that may limit how effectively these paradigms can be utilized and adopted into clinical practice is in the interpretability of these models by medical professionals for whom many of them are", "title": "" }, { "docid": "9a8901f5787bf6db6900ad2b4b6291c5", "text": "MOTIVATION\nAs biological inquiry produces ever more network data, such as protein-protein interaction networks, gene regulatory networks and metabolic networks, many algorithms have been proposed for the purpose of pairwise network alignment-finding a mapping from the nodes of one network to the nodes of another in such a way that the mapped nodes can be considered to correspond with respect to both their place in the network topology and their biological attributes. This technique is helpful in identifying previously undiscovered homologies between proteins of different species and revealing functionally similar subnetworks. 
In the past few years, a wealth of different aligners has been published, but few of them have been compared with one another, and no comprehensive review of these algorithms has yet appeared.\n\n\nRESULTS\nWe present the problem of biological network alignment, provide a guide to existing alignment algorithms and comprehensively benchmark existing algorithms on both synthetic and real-world biological data, finding dramatic differences between existing algorithms in the quality of the alignments they produce. Additionally, we find that many of these tools are inconvenient to use in practice, and there remains a need for easy-to-use cross-platform tools for performing network alignment.", "title": "" }, { "docid": "1ade1bea5fece2d1882c6b6fac1ef63e", "text": "Probe-based confocal laser endomicroscopy is a recent tissue imaging technology that requires placing a probe in contact with the tissue to be imaged and provides real time images with a microscopic resolution. Additionally, generating adequate probe movements to sweep the tissue surface can be used to reconstruct a wide mosaic of the scanned region while increasing the resolution which is appropriate for anatomico-pathological cancer diagnosis. However, properly controlling the motion along the scanning trajectory is a major problem. Indeed, the tissue exhibits deformations under friction forces exerted by the probe leading to deformed mosaics. In this paper we propose a visual servoing approach for controlling the probe movements relative to the tissue while rejecting the tissue deformation disturbance. The probe displacement with respect to the tissue is firstly estimated using the confocal images and an image registration real-time algorithm. Secondly, from this real-time image-based position measurement, the probe motion is controlled thanks to a simple proportional-integral compensator and a feedforward term. Ex vivo experiments using a Stäubli TX40 robot and a Mauna Kea Technologies Cellvizio imaging device demonstrate the effectiveness of the approach on liver and muscle tissue.", "title": "" }, { "docid": "bb9f5ab961668b8aac5f786d33fb7e1f", "text": "The process that resulted in the diagnostic criteria for posttraumatic stress disorder (PTSD) in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; American Psychiatric Association; ) was empirically based and rigorous. There was a high threshold for any changes in any DSM-IV diagnostic criterion. The process is described in this article. The rationale is presented that led to the creation of the new chapter, \"Trauma- and Stressor-Related Disorders,\" within the DSM-5 metastructure. Specific issues discussed about the DSM-5 PTSD criteria themselves include a broad versus narrow PTSD construct, the decisions regarding Criterion A, the evidence supporting other PTSD symptom clusters and specifiers, the addition of the dissociative and preschool subtypes, research on the new criteria from both Internet surveys and the DSM-5 field trials, the addition of PTSD subtypes, the noninclusion of complex PTSD, and comparisons between DSM-5 versus the World Health Association's forthcoming International Classification of Diseases (ICD-11) criteria for PTSD. The PTSD construct continues to evolve. In DSM-5, it has moved beyond a narrow fear-based anxiety disorder to include dysphoric/anhedonic and externalizing PTSD phenotypes. The dissociative subtype may open the way to a fresh approach to complex PTSD. 
The preschool subtype incorporates important developmental factors affecting the expression of PTSD in young children. Finally, the very different approaches taken by DSM-5 and ICD-11 should have a profound effect on future research and practice.", "title": "" }, { "docid": "2321500a01873c1bc7cf3e0e0bdf6d41", "text": "Advances in future computing to support emerging sensor applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. As a result, it is predicted that intelligent devices and networks, including mobile wireless sensor networks (MWSN), will become the new interfaces to support future applications. In this paper, we propose a novel approach to minimize energy consumption of processing an application in MWSN while satisfying a certain completion time requirement. Specifically, by introducing the concept of cooperation, the logics and related computation tasks can be optimally partitioned, offloaded and executed with the help of peer sensor nodes, thus the proposed solution can be treated as a joint optimization of computing and networking resources. Moreover, for a network with multiple mobile wireless sensor nodes, we propose energy efficient cooperation node selection strategies to offer a tradeoff between fairness and energy consumption. Our performance analysis is supplemented by simulation results to show the significant energy saving of the proposed solution.", "title": "" }, { "docid": "a4d315e5cff107329a603c19177259f1", "text": "Despite the fact that different studies have been performed using transcranial direct current stimulation (tDCS) in aphasia, so far, to what extent the stimulation of a cerebral region may affect the activity of anatomically connected regions remains unclear. The authors used a combination of transcranial magnetic stimulation (TMS) and electroencephalography (EEG) to explore brain areas' excitability modulation before and after active and sham tDCS. Six chronic aphasics underwent 3 weeks of language training coupled with tDCS over the right inferior frontal gyrus. To measure the changes induced by tDCS, TMS-EEG closed to the area stimulated with tDCS were calculated. A significant improvement after tDCS stimulation was found which was accompained by a modification of the EEG over the stimulated region.", "title": "" }, { "docid": "53920475aea52395045aaf687d2953fb", "text": "One of the components of abnormal social functioning in autism is an impaired ability to direct eye gaze onto other people's faces in social situations. Here, we investigated the relationship between gaze onto the eye and mouth regions of faces, and the visual information that was present within those regions. We used the \"Bubbles\" method to vary the facial information available on any given trial by revealing only small parts of the face, and measured the eye movements made as participants viewed these stimuli. Compared to ten IQ- and age-matched healthy controls, eight participants with autism showed less fixation specificity to the eyes and mouth, a greater tendency to saccade away from the eyes when information was present in those regions, and abnormal directionality of saccades. 
The findings provide novel detail to the abnormal way in which people with autism look at faces, an impairment that likely influences all subsequent face processing.", "title": "" }, { "docid": "11f9a54106c127cf87af4256a7f209c5", "text": "In recent years, Convolutional Neural Networks (CNNs) have revolutionized computer vision tasks. However, inference in current CNN designs is extremely computationally intensive. This has lead to an explosion of new accelerator architectures designed to reduce power consumption and latency [20]. In this paper, we design and implement a systolic array based architecture we call ConvAU to efficiently accelerate dense matrix multiplication operations in CNNs. We also train an 8-bit quantized version of Squeezenet[14] and evaluate our accelerator’s power consumption and throughput. Finally, we compare our results to the reported results for the K80 GPU and Google’s TPU. We find that ConvAU gives a 200x improvement in TOPs/W when compared to a NVIDIA K80 GPU and a 1.9x improvement when compared to the TPU.", "title": "" }, { "docid": "c0a3bb7720bd79d496bcf6281f444411", "text": "Do you dream to create good visualizations for your dataset simply like a Google search? If yes, our visionary systemDeepEye is committed to fulfill this task. Given a dataset and a keyword query, DeepEye understands the query intent, generates and ranks good visualizations. The user can pick the one he likes and do a further faceted search to easily navigate the visualizations. We detail the architecture of DeepEye, key components, as well as research challenges and opportunities.", "title": "" }, { "docid": "3fa63b98358afe9b16f983a4b3019ec4", "text": "In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human-robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human-robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately.", "title": "" } ]
scidocsrr
45e37f2da938fc6aed9f846a189301e9
Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware
[ { "docid": "6a4cd21704bfbdf6fb3707db10f221a8", "text": "Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that use recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to a standard implementation of LSTMs on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.", "title": "" }, { "docid": "a6aa10b5adcf3241157919cb0e6863e9", "text": "Current neural networks are accumulating accolades for their performance on a variety of real-world computational tasks including recognition, classification, regression, and prediction, yet there are few scalable architectures that have emerged to address the challenges posed by their computation. This paper introduces Minitaur, an event-driven neural network accelerator, which is designed for low power and high performance. As an field-programmable gate array-based system, it can be integrated into existing robotics or it can offload computationally expensive neural network tasks from the CPU. The version presented here implements a spiking deep network which achieves 19 million postsynaptic currents per second on 1.5 W of power and supports up to 65 K neurons per board. The system records 92% accuracy on the MNIST handwritten digit classification and 71% accuracy on the 20 newsgroups classification data set. Due to its event-driven nature, it allows for trading off between accuracy and latency.", "title": "" }, { "docid": "3ab0776937023005c5715257a180ff77", "text": "Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. 
Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.", "title": "" } ]
[ { "docid": "70991373ae71f233b0facd2b5dd1a0d3", "text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which are originated by insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. This survey attempts to line up these works together with only three most common types of insider namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. Finally, some issues are raised based on the observations from the reviewed works and new research gaps and challenges identified.", "title": "" }, { "docid": "b537af893b84a4c41edb829d45190659", "text": "We seek a complete description for the neurome of the Drosophila, which involves tracing more than 20,000 neurons. The currently available tracings are sensitive to background clutter and poor contrast of the images. In this paper, we present Tree2Tree2, an automatic neuron tracing algorithm to segment neurons from 3D confocal microscopy images. Building on our previous work in segmentation [1], this method uses an adaptive initial segmentation to detect the neuronal portions, as opposed to a global strategy that often results in under segmentation. In order to connect the disjoint portions, we use a technique called Path Search, which is based on a shortest path approach. An intelligent pruning step is also implemented to delete undesired branches. Tested on 3D confocal microscopy images of GFP labeled Drosophila neurons, the visual and quantitative results suggest that Tree2Tree2 is successful in automatically segmenting neurons in images plagued by background clutter and filament discontinuities.", "title": "" }, { "docid": "74d7e52e2187ff2ac1fd4c3ef28e2c82", "text": "This work is focused on processor allocation in shared-memory multiprocessor systems, where no knowledge of the application is available when applications are submitted. We perform the processor allocation taking into account the characteristics of the application measured at run-time. We want to demonstrate the importance of an accurate performance analysis and the criteria used to distribute the processors. With this aim, we present the SelfAnalyzer, an approach to dynamically analyzing the performance of applications (speedup, efficiency and execution time), and the Performance-Driven Processor Allocation (PDPA), a new scheduling policy that distributes processors considering both the global conditions of the system and the particular characteristics of running applications. 
This work also defends the importance of the interaction between the medium-term and the long-term scheduler to control the multiprogramming level in the case of the clairvoyant scheduling pol-icies1. We have implemented our proposal in an SGI Origin2000 with 64 processors and we have compared its performance with that of some scheduling policies proposed so far and with the native IRIX scheduling policy. Results show that the combination of the SelfAnalyzer+PDPA with the medium/long-term scheduling interaction outperforms the rest of the scheduling policies evaluated. The evaluation shows that in workloads where a simple equipartition performs well, the PDPA also performs well, and in extreme workloads where all the applications have a bad performance, our proposal can achieve a speedup of 3.9 with respect to an equipartition and 11.8 with respect to the native IRIX scheduling policy.", "title": "" }, { "docid": "693c29b040bb37142d95201589b24d0d", "text": "We are overwhelmed by the response to IJEIS. This response reflects the importance of the subject of enterprise information systems in global market and enterprise environments. We have some exciting special issues forthcoming in 2006. The first two issues will feature: (i) information and knowledge based approaches to improving performance in organizations, and (ii) hard and soft modeling tools and approaches to data and information management in real life projects and systems. IJEIS encourages researchers and practitioners to share their new ideas and results in enterprise information systems design and implementation, and also share relevant technical issues related to the development of such systems. This issue of IJEIS contains five articles dealing with an approach to evaluating ERP software within the acquisition process, uncertainty in ERP-controlled manufacturing systems, a review on IT business value research , methodologies for evaluating investment in electronic data interchange, and an ERP implementation model. An overview of the papers follows. The first paper, A Three-Dimensional Approach in Evaluating ERP Software within the Acquisition Process is authored by Verville, Bernadas and Halingten. This paper is based on an extensive study of the evaluation process of the acquisition of an ERP software of four organizations. Three distinct process types and activities were found: vendor's evaluation, functional evaluation , and technical evaluation. This paper provides a perspective on evaluation and sets it apart as modality for action, whose intent is to investigate and uncover by means of specific defined evaluative activities all issues pertinent to ERP software that an organization can use in its decision to acquire a solution that will meet its needs. The use of ERP is becoming increasingly prevalent in many modern manufacturing enterprises. However, knowledge of their performance when perturbed by several significant uncertainties simultaneously is not as widespread as it should have been. Koh, Gunasekaran, Saad and Arunachalam authored Uncertainty in ERP-Controlled Manufacturing Systems. The paper presents a developmental and experimental work on modeling uncertainty within an ERP multi-product, multi-level dependent demand manufacturing planning and scheduling system in a simulation model developed using ARENA/ SIMAN. 
To enumerate how uncertainty af", "title": "" }, { "docid": "20e8be9e9dbd62a56be0b64e7c2ae070", "text": "Stemmers attempt to reduce a word to its stem or root form and are used widely in information retrieval tasks to increase the recall rate. Most popular stemmers encode a large number of language-specific rules built over a length of time. Such stemmers with comprehensive rules are available only for a few languages. In the absence of extensive linguistic resources for certain languages, statistical language processing tools have been successfully used to improve the performance of IR systems. In this article, we describe a clustering-based approach to discover equivalence classes of root words and their morphological variants. A set of string distance measures are defined, and the lexicon for a given text collection is clustered using the distance measures to identify these equivalence classes. The proposed approach is compared with Porter's and Lovin's stemmers on the AP and WSJ subcollections of the Tipster dataset using 200 queries. Its performance is comparable to that of Porter's and Lovin's stemmers, both in terms of average precision and the total number of relevant documents retrieved. The proposed stemming algorithm also provides consistent improvements in retrieval performance for French and Bengali, which are currently resource-poor.", "title": "" }, { "docid": "76e5b2fec2d37b6df696e06186a350b3", "text": "Exponential growth in data volume originating from Internet of Things sources and information services drives the industry to develop new models and distributed tools to handle big data. In order to achieve strategic advantages, effective use of these tools and integrating results to their business processes are critical for enterprises. While there is an abundance of tools available in the market, they are underutilized by organizations due to their complexities. Deployment and usage of big data analysis tools require technical expertise which most of the organizations don't yet possess. Recently, the trend in the IT industry is towards developing prebuilt libraries and dataflow based programming models to abstract users from low-level complexities of these tools. After briefly analyzing trends in the literature and industry, this paper presents a conceptual framework which offers a higher level of abstraction to increase adoption of big data techniques as part of Industry 4.0 vision in future enterprises.", "title": "" }, { "docid": "47c96721db5ab8595ab3dcc2cf310954", "text": "Whereas people learn many different types of knowledge from diverse experiences over many years, most current machine learning systems acquire just a single function or data model from just a single data set. We propose a neverending learning paradigm for machine learning, to better reflect the more ambitious and encompassing type of learning performed by humans. As a case study, we describe the Never-Ending Language Learner (NELL), which achieves some of the desired properties of a never-ending learner, and we discuss lessons learned. NELL has been learning to read the web 24 hours/day since January 2010, and so far has acquired a knowledge base with over 80 million confidenceweighted beliefs (e.g., servedWith(tea, biscuits)). NELL has also learned millions of features and parameters that enable it to read these beliefs from the web. Additionally, it has learned to reason over these beliefs to infer new beliefs, and is able to extend its ontology by synthesizing new relational predicates. 
NELL can be tracked online at http://rtw.ml.cmu.edu, and followed on Twitter at @CMUNELL.", "title": "" }, { "docid": "47e9515f703c840c38ab0c3095f48a3a", "text": "Hnefatafl is an ancient Norse game - an ancestor of chess. In this paper, we report on the development of computer players for this game. In the spirit of Blondie24, we evolve neural networks as board evaluation functions for different versions of the game. An unusual aspect of this game is that there is no general agreement on the rules: it is no longer much played, and game historians attempt to infer the rules from scraps of historical texts, with ambiguities often resolved on gut feeling as to what the rules must have been in order to achieve a balanced game. We offer the evolutionary method as a means by which to judge the merits of alternative rule sets", "title": "" }, { "docid": "f74da565b36a92a8c83c447c3890c521", "text": "Learning practical information communication technology skills such as network configuration and security planning requires hands-on experience with a number of different devices which may be unavailable or too costly to provide, especially for institutions under tight budget constraints. This paper describes how a specific open software technology, paravirtualization, can be used to set up open source virtual networking labs (VNLs) easily and at virtually no cost. The paper highlights how paravirtual labs can be adopted jointly by partner organizations, e.g., when the institution hosting the virtual lab provides hands-on training and students' skill evaluation as a service to partner institutions overseas. A practical VNL implementation, the open virtual lab (OVL), is used to describe the added value that open source VNLs can give to e-Learning frameworks, achieving a level of students' performance comparable or better than the one obtained when students directly interact with physical networking equipment.", "title": "" }, { "docid": "e8366d4e7f59fc32da001d3513cf8eee", "text": "Multiview LSA (MVLSA) is a generalization of Latent Semantic Analysis (LSA) that supports the fusion of arbitrary views of data and relies on Generalized Canonical Correlation Analysis (GCCA). We present an algorithm for fast approximate computation of GCCA, which when coupled with methods for handling missing values, is general enough to approximate some recent algorithms for inducing vector representations of words. Experiments across a comprehensive collection of test-sets show our approach to be competitive with the state of the art.", "title": "" }, { "docid": "adf530152b474c2b6147da07acf3d70d", "text": "One of the basic services in a distributed network is clock synchronization as it enables a palette of services, such as synchronized measurements, coordinated actions, or time-based access to a shared communication medium. The IEEE 1588 standard defines the Precision Time Protocol (PTP) and provides a framework to synchronize multiple slave clocks to a master by means of synchronization event messages. While PTP is capable for synchronization accuracies below 1 ns, practical synchronization approaches are hitting a new barrier due to asymmetric line delays. Although compensation fields for the asymmetry are present in PTP version 2008, no specific measures to estimate the asymmetry are defined in the standard. In this paper we present a solution to estimate the line asymmetry in 100Base-TX networks based on line swapping. 
This approach seems appealing for existing installations as most Ethernet PHYs have the line swapping feature built in, and it only delays the network startup, but does not alter the operation of the network. We show by an FPGA-based prototype system that our approach is able to improve the synchronization offset from more than 10 ns down to below 200 ps.", "title": "" }, { "docid": "3493568f4ee6776094b2f2403f1fba43", "text": "An isolated bidirectional full-bridge dc–dc converter with high conversion ratio, high output power, and soft start-up capability is proposed in this paper. The use of a capacitor, a diode, and a flyback converter can clamp the voltage spike caused by the current difference between the current-fed inductor and leakage inductance of the isolation transformer, and can reduce the current flowing through the active switches at the current-fed side. Operational principle of the proposed converter is first described, and then, the design equation is derived. A 1.5kW prototype with low-side voltage of 48 V and high-side voltage of 360 V has been implemented, from which experimental results have verified its feasibility. IndexTerms— Flybackconverter, isolated fullbridge bidirectional converter, soft start-up.", "title": "" }, { "docid": "b56a6fe9c9d4b45e9d15054004fac918", "text": "Code-switching refers to the phenomena of mixing of words or phrases from foreign languages while communicating in a native language by the multilingual speakers. Codeswitching is a global phenomenon and is widely accepted in multilingual communities. However, for training the language model (LM) for such tasks, a very limited code-switched textual resources are available as yet. In this work, we present an approach to reduce the perplexity (PPL) of Hindi-English code-switched data when tested over the LM trained on purely native Hindi data. For this purpose, we propose a novel textual feature which allows the LM to predict the code-switching instances. The proposed feature is referred to as code-switching factor (CS-factor). Also, we developed a tagger that facilitates the automatic tagging of the code-switching instances. This tagger is trained on a development data and assigns an equivalent class of foreign (English) words to each of the potential native (Hindi) words. For this study, the textual resource has been created by crawling the blogs from a couple of websites educating about the usage of the Internet. In the context of recognition of the code-switching data, the proposed technique is found to yield a substantial improvement in terms of PPL.", "title": "" }, { "docid": "ce0a855890322a98dffbb6f1a3af1c07", "text": "Gender reassignment (which includes psychotherapy, hormonal therapy and surgery) has been demonstrated as the most effective treatment for patients affected by gender dysphoria (or gender identity disorder), in which patients do not recognize their gender (sexual identity) as matching their genetic and sexual characteristics. Gender reassignment surgery is a series of complex surgical procedures (genital and nongenital) performed for the treatment of gender dysphoria. Genital procedures performed for gender dysphoria, such as vaginoplasty, clitorolabioplasty, penectomy and orchidectomy in male-to-female transsexuals, and penile and scrotal reconstruction in female-to-male transsexuals, are the core procedures in gender reassignment surgery. 
Nongenital procedures, such as breast enlargement, mastectomy, facial feminization surgery, voice surgery, and other masculinization and feminization procedures complete the surgical treatment available. The World Professional Association for Transgender Health currently publishes and reviews guidelines and standards of care for patients affected by gender dysphoria, such as eligibility criteria for surgery. This article presents an overview of the genital and nongenital procedures available for both male-to-female and female-to-male gender reassignment.", "title": "" }, { "docid": "4e55155fe0065d45ce94e1be4087cabf", "text": "A relatively new trend in Critical Infrastructures (e.g., power plants, nuclear plants, energy grids, etc.) is the massive migration from the classic model of isolated systems, to a system-of-systems model, where these infrastructures are intensifying their interconnections through Information and Communications Technology (ICT) means. The ICT core of these industrial installations is known as Supervisory Control And Data Acquisition Systems (SCADA). Traditional ICT security countermeasures (e.g., classic firewalls, anti-viruses and IDSs) fail in providing a complete protection to these systems since their needs are different from those of traditional ICT. This paper presents an innovative approach to Intrusion Detection in SCADA systems based on the concept of Critical State Analysis and State Proximity. The theoretical framework is supported by tests conducted with an Intrusion Detection System prototype implementing the proposed detection approach.", "title": "" }, { "docid": "87518b738a57fe28197f65af20199b0a", "text": "Crowdsourced clustering approaches present a promising way to harness deep semantic knowledge for clustering complex information. However, existing approaches have difficulties supporting the global context needed for workers to generate meaningful categories, and are costly because all items require human judgments. We introduce Alloy, a hybrid approach that combines the richness of human judgments with the power of machine algorithms. Alloy supports greater global context through a new \"sample and search\" crowd pattern which changes the crowd's task from classifying a fixed subset of items to actively sampling and querying the entire dataset. It also improves efficiency through a two phase process in which crowds provide examples to help a machine cluster the head of the distribution, then classify low-confidence examples in the tail. To accomplish this, Alloy introduces a modular \"cast and gather\" approach which leverages a machine learning backbone to stitch together different types of judgment tasks.", "title": "" }, { "docid": "89c2f51884e22446b523fab38e4cb34b", "text": "Coordinated intrusion, like DDoS, Worm outbreak and Botnet, is a major threat to network security nowadays and will continue to be a threat in the future. To ensure the Internet security, effective detection and mitigation for such attacks are indispensable. In this paper, we propose a novel collaborative intrusion prevention architecture, i.e. CIPA, aiming at confronting such coordinated intrusion behavior. CIPA is deployed as a virtual network of an artificial neural net over the substrate of networks. Taking advantage of the parallel and simple mathematical manipulation of neurons in a neural net, CIPA can disperse its lightweight computation power to the programmable switches of the substrate. Each programmable switch virtualizes one to several neurons. 
The whole neural net functions like an integrated IDS/IPS. This allows CIPA to detect distributed attacks on a global view. Meanwhile, it does not require high communication and computation overhead. It is scalable and robust. To validate CIPA, we have realized a prototype on Software-Defined Networks. We also conducted simulations and experiments. The results demonstrate that CIPA is effective.", "title": "" }, { "docid": "260e574e9108e05b98df7e4ed489e5fc", "text": "Why are we not living yet with robots? If robots are not common everyday objects, it is maybe because we have looked for robotic applications without considering with sufficient attention what could be the experience of interacting with a robot. This article introduces the idea of a value profile, a notion intended to capture the general evolution of our experience with different kinds of objects. After discussing value profiles of commonly used objects, it offers a rapid outline of the challenging issues that must be investigated concerning immediate, short-term and long-term experience with robots. Beyond science-fiction classical archetypes, the picture emerging from this analysis is the one of versatile everyday robots, autonomously developing in interaction with humans, communicating with one another, changing shape and body in order to be adapted to their various context of use. To become everyday objects, robots will not necessary have to be useful, but they will have to be at the origins of radically new forms of experiences.", "title": "" }, { "docid": "0d9affda4d9f7089d76a492676ab3f9e", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR' s Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR' s Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. The American Political Science Review is published by American Political Science Association. Please contact the publisher for further permissions regarding the use of this work. Publisher contact information may be obtained at http://www.jstor.org/joumals/apsa.html.", "title": "" }, { "docid": "6f3573570c92e90c4f0a557141d79c76", "text": "This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. 
For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis on the bounds related with the different algorithms derived from our framework and experimental results/comparisons using synthetic and real data for a variety of tasks in computer vision demonstrate the extreme potentials of our approach.", "title": "" } ]
scidocsrr
8d038b39eb8e6ae8530e205bb80a3c74
Software architecture: a travelogue
[ { "docid": "a98631b46893645a94a83995836dc71d", "text": "This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.", "title": "" } ]
[ { "docid": "e342178b5c8ee8a48add15fefa0ef5f8", "text": "A new scheme is proposed for the dual-band operation of the Wilkinson power divider/combiner. The dual band operation is achieved by attaching two central transmission line stubs to the conventional Wilkinson divider. It has simple structure and is suitable for distributed circuit implementation.", "title": "" }, { "docid": "d2694577861e75535e59e316bd6a9015", "text": "Despite being a new term, ‘fake news’ has evolved rapidly. This paper argues that it should be reserved for cases of deliberate presentation of (typically) false or misleading claims as news, where these are misleading by design. The phrase ‘by design’ here refers to systemic features of the design of the sources and channels by which fake news propagates and, thereby, manipulates the audience’s cognitive processes. This prospective definition is then tested: first, by contrasting fake news with other forms of public disinformation; second, by considering whether it helps pinpoint conditions for the (recent) proliferation of fake news. Résumé: En dépit de son utilisation récente, l’expression «fausses nouvelles» a évolué rapidement. Cet article soutient qu'elle devrait être réservée aux présentations intentionnelles d’allégations (typiquement) fausses ou trompeuses comme si elles étaient des nouvelles véridiques et où elles sont faussées à dessein. L'expression «à dessein» fait ici référence à des caractéristiques systémiques de la conception des sources et des canaux par lesquels les fausses nouvelles se propagent et par conséquent, manipulent les processus cognitifs du public. Cette définition prospective est ensuite mise à l’épreuve: d'abord, en opposant les fausses nouvelles à d'autres formes de désinformation publique; deuxièmement, en examinant si elle aide à cerner les conditions de la prolifération (récente) de fausses nou-", "title": "" }, { "docid": "e651af2be422e13548af7d3152d27539", "text": "A sample of 116 children (M=6 years 7 months) in Grade 1 was randomly assigned to experimental (n=60) and control (n=56) groups, with equal numbers of boys and girls in each group. The experimental group received a program aimed at improving representation and transformation of visuospatial information, whereas the control group received a substitute program. All children were administered mental rotation tests before and after an intervention program and a Global-Local Processing Strategies test before the intervention. The results revealed that initial gender differences in spatial ability disappeared following treatment in the experimental but not in the control group. Gender differences were moderated by strategies used to process visuospatial information. Intervention and processing strategies were essential in reducing gender differences in spatial abilities.", "title": "" }, { "docid": "6b236f1e123dd27e7c52392e8efa500d", "text": "An ordered probit regression model estimated using 15 years’ data is used to model English league football match results. As well as past match results data, the significance of the match for end-ofseason league outcomes; the involvement of the teams in cup competition; the geographical distance between the two teams’ home towns; and the average attendances of the two teams all contribute to the model’s performance. 
The model is used to test the weak-form efficiency of prices in the fixedodds betting market, and betting strategies with a positive expected return are identified.", "title": "" }, { "docid": "c33d7a61c21aba16e421953346e2e5cc", "text": "Many college students experience depression or anxiety but do not seek help due to the social stigma associated with psychological counseling services. Automatic techniques to classify social media messages based on the emotions they express can assist in the early detection of students in need of counseling. Supervised machine learning methods yield accurate results but require training datasets of text messages that have been labelled with the classes of emotions they express. Manually labeling a large corpus of Twitter messages is labor-intensive, error prone and time-consuming. Hashtags are keywords inserted into social media messages by their authors. In this paper, we investigate using hashtags as emotion labels and evaluate them through two user studies, one with psychology experts and the other with the general crowd. The study showed that the labels created by general crowd was inconsistent and unreliable. However, the labels generated by experts matched with hashtag labels in over 87% of Twitter messages, which indicates that hashtags are indeed good emotion labels. Leveraging the concept of hashtags as emotion labels, we develop Emotex, a supervised learning approach that classifies Twitter messages into the emotion classes they express. We show that Emotex correctly classifies the emotions expressed in over 90% of text messages.", "title": "" }, { "docid": "eba084c2730966f2a5b258f907ee78a6", "text": "With new defenses against traditional control-flow attacks like stack buffer overflows, attackers are increasingly using more advanced mechanisms to take control of execution. One common such attack is vtable hijacking, in which the attacker exploits bugs in C++ programs to overwrite pointers to the virtual method tables (vtables) of objects. We present a novel defense against this attack. The key insight of our approach is a new way of laying out vtables in memory through careful ordering and interleaving. Although this layout is very different from a traditional layout, it is backwards compatible with the traditional way of performing dynamic dispatch. Most importantly, with this new layout, checking the validity of a vtable at runtime becomes an efficient range check, rather than a set membership test. Compared to prior approaches that provide similar guarantees, our approach does not use any profiling information, has lower performance overhead (about 1%) and has lower code bloat overhead (about 1.7%).", "title": "" }, { "docid": "a9cfb59c0187466d64010a3f39ac0e30", "text": "Model-free Reinforcement Learning (RL) offers an attractive approach to learn control policies for highdimensional systems, but its relatively poor sample complexity often necessitates training in simulated environments. Even in simulation, goal-directed tasks whose natural reward function is sparse remain intractable for state-of-the-art model-free algorithms for continuous control. The bottleneck in these tasks is the prohibitive amount of exploration required to obtain a learning signal from the initial state of the system. In this work, we leverage physical priors in the form of an approximate system dynamics model to design a curriculum for a model-free policy optimization algorithm. 
Our Backward Reachability Curriculum (BaRC) begins policy training from states that require a small number of actions to accomplish the task, and expands the initial state distribution backwards in a dynamically-consistent manner once the policy optimization algorithm demonstrates sufficient performance. BaRC is general, in that it can accelerate training of any model-free RL algorithm on a broad class of goal-directed continuous control MDPs. Its curriculum strategy is physically intuitive, easy-to-tune, and allows incorporating physical priors to accelerate training without hindering the performance, flexibility, and applicability of the model-free RL algorithm. We evaluate our approach on two representative dynamic robotic learning problems and find substantial performance improvement relative to previous curriculum generation techniques and naı̈ve exploration strategies.", "title": "" }, { "docid": "1ff51e3f6b73aa6fe8eee9c1fb404e4e", "text": "The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects.", "title": "" }, { "docid": "67a62792ba0283e84ace7937615d3090", "text": "Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to degrade the agent. To address these issues, we present Deep Dyna-Q, which to our knowledge is the first deep RL framework that integrates planning for task-completion dialogue policy learning. We incorporate into the dialogue agent a model of the environment, referred to as the world model, to mimic real user response and generate simulated experience. During dialogue policy learning, the world model is constantly updated with real user experience to approach real user behavior, and in turn, the dialogue agent is optimized using both real experience and simulated experience. The effectiveness of our approach is demonstrated on a movie-ticket booking task in both simulated and human-in-theloop settings1.", "title": "" }, { "docid": "7277ab3a4228a9f266549952fc668afd", "text": "Anomaly detection in a WSN is an important aspect of data analysis in order to identify data items that significantly differ from normal data. A characteristic of the data generated by a WSN is that the data distribution may alter over the lifetime of the network due to the changing nature of the phenomenon being observed. 
Anomaly detection techniques must be able to adapt to a non-stationary data distribution in order to perform optimally. In this survey, we provide a comprehensive overview of approaches to anomaly detection in a WSN and their operation in a non-stationary environment.", "title": "" }, { "docid": "b9b634c93f2cc216370a94128aeab596", "text": "Life-cycle models of labor supply predict a positive relationship between hours supplied and transitory changes in wages. We tested this prediction   ", "title": "" }, { "docid": "8e094cb05d16c73d7bf7c2cbb553873d", "text": "In this paper, the design of command to line-of-sight (CLOS) missile guidance law is addressed. Taking a three dimensional guidance model, the tracking control problem is formulated. To solve the target tracking problem, the feedback linearization controller is first designed. Although such control scheme possesses the simplicity property, but it presents the acceptable performance only in the absence of perturbations. In order to ensure the robustness properties against model uncertainties, a fuzzy adaptive algorithm is proposed with two parts including a fuzzy (Mamdani) system, whose rules are constructed based on missile guidance, and a so-called rule modifier to compensate the fuzzy rules, using the negative gradient method. Compared with some previous works, such control strategy provides a faster time response without large control efforts. The performance of feedback linearization controller is also compared with that of fuzzy adaptive strategy via various simulations.", "title": "" }, { "docid": "72a51dfdcdf5ff70c94922a048f218d1", "text": "We have synthesized thermodynamically metastable Ca2IrO4 thin-films on YAlO3 (110) substrates by pulsed laser deposition. The epitaxial Ca2IrO4 thin-films are of K2NiF4-type tetragonal structure. Transport and optical spectroscopy measurements indicate that the electronic structure of the Ca2IrO4 thin-films is similar to that of Jeff = 1/2 spin-orbit-coupled Mott insulator Sr2IrO4 and Ba2IrO4, with the exception of an increased gap energy. The gap increase is to be expected in Ca2IrO4 due to its increased octahedral rotation and tilting, which results in enhanced electron-correlation, U/W. Our results suggest that the epitaxial stabilization growth of metastable-phase thin-films can be used effectively for investigating layered iridates and various complex-oxide systems.", "title": "" }, { "docid": "397d6f645f5607140cf7d16597b8ec83", "text": "OBJECTIVES\nTo determine if differences between dyslexic and typical readers in their reading scores and verbal IQ are evident as early as first grade and whether the trajectory of these differences increases or decreases from childhood to adolescence.\n\n\nSTUDY DESIGN\nThe subjects were the 414 participants comprising the Connecticut Longitudinal Study, a sample survey cohort, assessed yearly from 1st to 12th grade on measures of reading and IQ. Statistical analysis employed longitudinal models based on growth curves and multiple groups.\n\n\nRESULTS\nAs early as first grade, compared with typical readers, dyslexic readers had lower reading scores and verbal IQ, and their trajectories over time never converge with those of typical readers. 
These data demonstrate that such differences are not so much a function of increasing disparities over time but instead because of differences already present in first grade between typical and dyslexic readers.\n\n\nCONCLUSIONS\nThe achievement gap between typical and dyslexic readers is evident as early as first grade, and this gap persists into adolescence. These findings provide strong evidence and impetus for early identification of and intervention for young children at risk for dyslexia. Implementing effective reading programs as early as kindergarten or even preschool offers the potential to close the achievement gap.", "title": "" }, { "docid": "8762106693491e46772c2efade5929dc", "text": "A collection of technologies termed social computing is driving a dramatic evolution of the Web, matching the dot-com era in growth, excitement, and investment. All of these share a high degree of community formation, user level content creation, and a variety of other characteristics. We provide an overview of social computing and identify salient characteristics. We argue that social computing holds tremendous disruptive potential in the business world and can significantly impact society, and outline possible changes in organized human action that could be brought about. Social computing can also have deleterious effects associated with it, including security issues. We suggest that social computing should be a priority for researchers and business leaders and illustrate the fundamental shifts in communication, computing, collaboration, and commerce brought about by this trend.", "title": "" }, { "docid": "87ac799402c785e68db14636b0725523", "text": "One of the challenges of creating applications from confederations of Internet-enabled things is the complexity of having to deal with spontaneously interacting and partially available heterogeneous devices. In this paper we describe the features of the MAGIC Broker 2 (MB2) a platform designed to offer a simple and consistent programming interface for collections of things. We report on the key abstractions offered by the platform and report on its use for developing two IoT applications involving spontaneous device interaction: 1) mobile phones and public displays, and 2) a web-based sensor actuator network portal called Sense Tecnic (STS). We discuss how the MB2 abstractions and implementation have evolved over time to the current design. Finally we present a preliminary performance evaluation and report qualitatively on the developers' experience of using our platform.", "title": "" }, { "docid": "7ea6a5d576e84e15d1da5c2256592fa5", "text": "Context An optimal software development process is regarded as being dependent on the situational characteristics of individual software development settings. Such characteristics include the nature of the application(s) under development, team size, requirements volatility and personnel experience. However, no comprehensive reference framework of the situational factors affecting the software development process is presently available. Objective The absence of such a comprehensive reference framework of the situational factors affecting the software development process is problematic not just because it inhibits our ability to optimise the software development process, but perhaps more importantly, because it potentially undermines our capacity to ascertain the key constraints and characteristics of a software development setting. 
Method To address this deficiency, we have consolidated a substantial body of related research into an initial reference framework of the situational factors affecting the software development process. To support the data consolidation, we have applied rigorous data coding techniques from Grounded Theory and we believe that the resulting framework represents an important contribution to the software engineering field of knowledge. Results The resulting reference framework of situational factors consists of 8 classifications and 44 factors that inform the software process. We believe that the situational factor reference framework presented herein represents a sound initial reference framework for the key situational elements affecting the software process definition. Conclusion In addition to providing a useful reference listing for the research community and for committees engaged in the development of standards, the reference framework also provides support for practitioners who are challenged with defining and maintaining software development processes. Furthermore, this framework can be used to develop a profile of the situational characteristics of a software development setting, which in turn provides a sound foundation for software development process definition and optimisation.", "title": "" }, { "docid": "1bfab561c8391dad6f0493fa7614feba", "text": "Submission instructions: You should submit your answers via GradeScope and your code via Snap submission site. Submitting answers: Prepare answers to your homework into a single PDF file and submit it via http://gradescope.com. Make sure that answer to each question is on a separate page. This means you should submit a 14-page PDF (1 page for the cover sheet, 4 pages for the answers to question 1, 3 pages for answers to question 2, and 6 pages for question 3). On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. Put all the code for a single question into a single file and upload it. Questions We strongly encourage you to use Snap.py for Python. However, you can use any other graph analysis tool or package you want (SNAP for C++, NetworkX for Python, JUNG for Java, etc.). A question that occupied sociologists and economists as early as the 1900's is how do innovations (e.g. ideas, products, technologies, behaviors) diffuse (spread) within a society. One of the prominent researchers in the field is Professor Mark Granovetter who among other contributions introduced along with Thomas Schelling threshold models in sociology. In Granovetter's model, there is a population of individuals (mob) and for simplicity two behaviours (riot or not riot). • Threshold model: each individual i has a threshold t i that determines her behavior in the following way. If there are at least t i individuals that are rioting, then she will join the riot, otherwise she stays inactive. Here, it is implicitly assumed that each individual has full knowledge of the behavior of all other individuals in the group. Nodes with small threshold are called innovators (early adopters) and nodes with large threshold are called laggards (late adopters). 
Granovetter's threshold model has been successful in explaining classical empirical adoption curves by relating them to thresholds in", "title": "" } ]
scidocsrr
4f2fb8061e59c30496282133ffaab027
An overview of vulnerability assessment and penetration testing techniques
[ { "docid": "34461f38c51a270e2f3b0d8703474dfc", "text": "Software vulnerabilities are the root cause of computer security problem. How people can quickly discover vulnerabilities existing in a certain software has always been the focus of information security field. This paper has done research on software vulnerability techniques, including static analysis, Fuzzing, penetration testing. Besides, the authors also take vulnerability discovery models as an example of software vulnerability analysis methods which go hand in hand with vulnerability discovery techniques. The ending part of the paper analyses the advantages and disadvantages of each technique introduced here and talks about the future direction of this field.", "title": "" } ]
[ { "docid": "45a92ab90fabd875a50229921e99dfac", "text": "This paper describes an empirical study of the problems encountered by 32 blind users on the Web. Task-based user evaluations were undertaken on 16 websites, yielding 1383 instances of user problems. The results showed that only 50.4% of the problems encountered by users were covered by Success Criteria in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0). For user problems that were covered by WCAG 2.0, 16.7% of websites implemented techniques recommended in WCAG 2.0 but the techniques did not solve the problems. These results show that few developers are implementing the current version of WCAG, and even when the guidelines are implemented on websites there is little indication that people with disabilities will encounter fewer problems. The paper closes by discussing the implications of this study for future research and practice. In particular, it discusses the need to move away from a problem-based approach towards a design principle approach for web accessibility.", "title": "" }, { "docid": "ed7832f6fbb1777ab3139cc8b5dd2d28", "text": "Tree ensemble models such as random forests and boosted trees are among the most widely used and practically successful predictive models in applied machine learning and business analytics. Although such models have been used to make predictions based on exogenous, uncontrollable independent variables, they are increasingly being used to make predictions where the independent variables are controllable and are also decision variables. In this paper, we study the problem of tree ensemble optimization: given a tree ensemble that predicts some dependent variable using controllable independent variables, how should we set these variables so as to maximize the predicted value? We formulate the problem as a mixed-integer optimization problem. We theoretically examine the strength of our formulation, provide a hierarchy of approximate formulations with bounds on approximation quality and exploit the structure of the problem to develop two large-scale solution methods, one based on Benders decomposition and one based on iteratively generating tree split constraints. We test our methodology on real data sets, including two case studies in drug design and customized pricing, and show that our methodology can efficiently solve large-scale instances to near or full optimality, and outperforms solutions obtained by heuristic approaches. In our drug design case, we show how our approach can identify compounds that efficiently trade-off predicted performance and novelty with respect to existing, known compounds. In our customized pricing case, we show how our approach can efficiently determine optimal store-level prices under a random forest model that delivers excellent predictive accuracy.", "title": "" }, { "docid": "d89d80791ac8157d054652e5f1292ebb", "text": "The Great Gatsby Curve, the observation that for OECD countries, greater crosssectional income inequality is associated with lower mobility, has become a prominent part of scholarly and policy discussions because of its implications for the relationship between inequality of outcomes and inequality of opportunities. We explore this relationship by focusing on evidence and interpretation of an intertemporal Gatsby Curve for the United States. 
We consider inequality/mobility relationships that are derived from nonlinearities in the transmission process of income from parents to children and the relationship that is derived from the effects of inequality of socioeconomic segregation, which then affects children. Empirical evidence for the mechanisms we identify is strong. We find modest reduced form evidence and structural evidence of an intertemporal Gatsby Curve for the US as mediated by social influences. Steven N. Durlauf Ananth Seshadri Department of Economics Department of Economics University of Wisconsin University of Wisconsin 1180 Observatory Drive 1180 Observatory Drive Madison WI, 53706 Madison WI, 53706 durlauf@gmail.com aseshadr@ssc.wisc.edu", "title": "" }, { "docid": "2ecd815af00b9961259fa9b2a9185483", "text": "This paper describes the current development status of a mobile robot designed to inspect the outer surface of large oil ship hulls and floating production storage and offloading platforms. These vessels require a detailed inspection program, using several nondestructive testing techniques. A robotic crawler designed to perform such inspections is presented here. Locomotion over the hull is provided through magnetic tracks, and the system is controlled by two networked PCs and a set of custom hardware devices to drive motors, video cameras, ultrasound, inertial platform, and other devices. Navigation algorithm uses an extended-Kalman-filter (EKF) sensor-fusion formulation, integrating odometry and inertial sensors. It was shown that the inertial navigation errors can be decreased by selecting appropriate Q and R matrices in the EKF formulation.", "title": "" }, { "docid": "d7f92d2503d02a76c635c4ab5bce1f1e", "text": "A fundamental feature of learning in animals is the “ability to forget” that allows an organism to perceive, model, and make decisions from disparate streams of information and adapt to changing environments. Against this backdrop, we present a novel unsupervised learning mechanism adaptive synaptic plasticity (ASP) for improved recognition with spiking neural networks (SNNs) for real time online learning in a dynamic environment. We incorporate an adaptive weight decay mechanism with the traditional spike timing dependent plasticity (STDP) learning to model adaptivity in SNNs. The leak rate of the synaptic weights is modulated based on the temporal correlation between the spiking patterns of the pre- and post-synaptic neurons. This mechanism helps in gradual forgetting of insignificant data while retaining significant, yet old, information. ASP, thus, maintains a balance between forgetting and immediate learning to construct a stable-plastic self-adaptive SNN for continuously changing inputs. We demonstrate that the proposed learning methodology addresses catastrophic forgetting, while yielding significantly improved accuracy over the conventional STDP learning method for digit recognition applications. 
In addition, we observe that the proposed learning model automatically encodes selective attention toward relevant features in the input data, while eliminating the influence of background noise (or denoising) further improving the robustness of the ASP learning.", "title": "" }, { "docid": "82ca6a400bf287dc287df9fa751ddac2", "text": "Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We shall use the generic term “information systems”, in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity is the adoption of a highly interdisciplinary approach, while on the architectural side the most interesting aspect is the centrality of the role they can play in an information system, leading to the perspective of ontology-driven information systems.", "title": "" }, { "docid": "f3bed3a3234fd61a168c9653a82b2f04", "text": "Digital libraries such as the NASA Astrophysics Data System (Kurtz et al. 2004) permit the easy accumulation of a new type of bibliometric measure, the number of electronic accesses (“reads”) of individual articles. We explore various aspects of this new measure. We examine the obsolescence function as measured by actual reads, and show that it can be well fit by the sum of four exponentials with very different time constants. We compare the obsolescence function as measured by readership with the obsolescence function as measured by citations. We find that the citation function is proportional to the sum of two of the components of the readership function. This proves that the normative theory of citation is true in the mean. We further examine in detail the similarities and differences between the citation rate, the readership rate and the total citations for individual articles, and discuss some of the causes. Using the number of reads as a bibliometric measure for individuals, we introduce the read-cite diagram to provide a two-dimensional view of an individual's scientific productivity. We develop a simple model to account for an individual's reads and cites and use it to show that the position of a person in the read-cite diagram is a function of age, innate productivity, and work history. We show the age biases of both reads and cites, and develop two new bibliometric measures which have substantially less age bias than citations: SumProd, a weighted sum of total citations and the readership rate, intended to show the total productivity of an individual; and Read10, the readership rate for papers published in the last ten years, intended to show an individual's current productivity. We also discuss the effect of normalization (dividing by the number of authors on a paper) on these statistics. 
We apply SumProd and Read10 using new, non-parametric techniques to rank and compare different astronomical research organizations. Subject headings: digital libraries; bibliometrics; sociology of science; information retrieval", "title": "" }, { "docid": "cec97a91937daebec592085319e0f01e", "text": "Key features of the two dominating standards for the unlicensed bands, IEEE 802.11 and Bluetooth Wireless Technology, are combined to obtain a physical layer (PHY) with several desirable features for internet of things (IoT). The proposed PHY, which is referred to as Narrow-band WiFi (NB-WiFi) can be supported by an OFDM transceiver available in an IEEE 802.11 access point (AP). In addition, NB-WiFi supports concurrent use of low data rate IoT application and high data rate broadband using IEEE 802.11ax technology, based on a single IFFT/FFT in the AP. In the sensor node, Bluetooth Low Energy (BLE) hardware can be reused, making it suitable for dual mode implementation of BLE and NB-WiFi. The performance of the proposed PHY is simulated for an AWGN channel, and it achieves about 10dB improved sensitivity compared to a typical BLE receiver, due to the lower data rate.", "title": "" }, { "docid": "a1757ee58eb48598d3cd6e257b53cd10", "text": "This paper examines the issues of puzzle design in the context of collaborative gaming. The qualitative research approach involves both the conceptual analysis of key terminology and a case study of a collaborative game called eScape. The case study is a design experiment, involving both the process of designing a game environment and an empirical study, where data is collected using multiple methods. The findings and conclusions emerging from the analysis provide insight into the area of multiplayer puzzle design. The analysis and reflections answer questions on how to create meaningful puzzles requiring collaboration and how far game developers can go with collaboration design. The multiplayer puzzle design introduces a new challenge for game designers. Group dynamics, social roles and an increased level of interaction require changes in the traditional conceptual understanding of a single-player puzzle.", "title": "" }, { "docid": "37ac562b07d6d191eabbec94ea344e82", "text": "License plate recognition has been widely studied, and the advance in image capture technology helps enhance or create new methods to achieve this objective. In this work is presented a method for real time detection and segmentation of car license plates based on image analyzing and processing techniques. The results show that the computational cost and accuracy rate considering the proposed approach are acceptable to real time applications, with an execution time under 1 second. The proposed method was validated using two datasets (A and B). It was obtained over 92% detection success for dataset A, 88% in digit segmentation for datasets A and B, and 95% digits classification accuracy rate for dataset B.", "title": "" }, { "docid": "8f4ce2d2ec650a3923d27c3188f30f38", "text": "Synthetic aperture radar (SAR) interferometry is a modern efficient technique that allows reconstructing the height profile of the observed scene. However, apart for the presence of critical nonlinear inversion steps, particularly crucial in abrupt topography scenarios, it does not allow one to separate different scattering mechanisms in the elevation (height) direction within the ground pixel. 
Overlay of scattering at different elevations in the same azimuth-range resolution cell can be due either to the penetration of the radiation below the surface or to perspective ambiguities caused by the side-looking geometry. Multibaseline three-dimensional (3-D) SAR focusing allows overcoming such a limitation and has thus raised great interest in the recent research. First results with real data have been only obtained in the laboratory and with airborne systems, or with limited time-span and spatial-coverage spaceborne data. This work presents a novel approach for the tomographic processing of European Remote Sensing satellite (ERS) real data for extended scenes and long time span. Besides facing problems common to the airborne case, such as the nonuniformly spaced passes, this processing requires tackling additional difficulties specific to the spaceborne case, in particular a space-varying phase calibration of the data due to atmospheric variations and possible scene deformations occurring for years-long temporal spans. First results are presented that confirm the capability of ERS multipass tomography to resolve multiple targets within the same azimuth-range cell and to map the 3-D scattering properties of the illuminated scene.", "title": "" }, { "docid": "3e0dd3cf428074f21aaf202342003554", "text": "Despite significant recent work, purely unsupervised techniques for part-of-speech (POS) tagging have not achieved useful accuracies required by many language processing tasks. Use of parallel text between resource-rich and resource-poor languages is one source of weak supervision that significantly improves accuracy. However, parallel text is not always available and techniques for using it require multiple complex algorithmic steps. In this paper we show that we can build POS-taggers exceeding state-of-the-art bilingual methods by using simple hidden Markov models and a freely available and naturally growing resource, the Wiktionary. Across eight languages for which we have labeled data to evaluate results, we achieve accuracy that significantly exceeds best unsupervised and parallel text methods. We achieve highest accuracy reported for several languages and show that our approach yields better out-of-domain taggers than those trained using fully supervised Penn Treebank.", "title": "" }, { "docid": "a1774a08ffefd28785fbf3a8f4fc8830", "text": "Bounds are given for the empirical and expected Rademacher complexity of classes of linear transformations from a Hilbert space H to a finite dimensional space. The results imply generalization guarantees for graph regularization and multi-task subspace learning. 1 Introduction Rademacher averages have been introduced to learning theory as an efficient complexity measure for function classes, motivated by tight, sample or distribution dependent generalization bounds ([10], [2]). Both the definition of Rademacher complexity and the generalization bounds extend easily from real-valued function classes to function classes with values in R, as they are relevant to multi-task learning ([1], [12]). There has been an increasing interest in multi-task learning which has shown to be very effective in experiments ([7], [1]), and there have been some general studies of its generalisation performance ([4], [5]). For a large collection of tasks there are usually more data available than for a single task and these data may be put to a coherent use by some constraint of ’relatedness’. 
A practically interesting case is linear multi-task learning, extending linear large margin classifiers to vector valued large-margin classifiers. Different types of constraints have been proposed: Evgeniou et al ([8], [9]) propose graph regularization, where the vectors defining the classifiers of related tasks have to be near each other. They also show that their scheme can be implemented in the framework of kernel machines. Ando and Zhang [1] on the other hand require the classifiers to be members of a common low dimensional subspace. They also give generalization bounds using Rademacher complexity, but these bounds increase with the dimension of the input space. This paper gives dimension free bounds which apply to both approaches. 1.1 Multi-task generalization and Rademacher complexity Suppose we have m classification tasks, represented by m independent random variables (X^l, Y^l) taking values in X × {-1, 1}, where X^l models the random", "title": "" }, { "docid": "f84011e3b4c8b1e80d4e79dee3ccad53", "text": "What is the future of fashion? Tackling this question from a data-driven vision perspective, we propose to forecast visual style trends before they occur. We introduce the first approach to predict the future popularity of styles discovered from fashion images in an unsupervised manner. Using these styles as a basis, we train a forecasting model to represent their trends over time. The resulting model can hypothesize new mixtures of styles that will become popular in the future, discover style dynamics (trendy vs. classic), and name the key visual attributes that will dominate tomorrow's fashion. We demonstrate our idea applied to three datasets encapsulating 80,000 fashion products sold across six years on Amazon. Results indicate that fashion forecasting benefits greatly from visual analysis, much more than textual or meta-data cues surrounding products.", "title": "" }, { "docid": "7b496aac963284f3415ac98b3abd8165", "text": "Forecasting is an important data analysis technique that aims to study historical data in order to explore and predict its future values. In fact, to forecast, different methods have been tested and applied from regression to neural network models. In this research, we proposed Elman Recurrent Neural Network (ERNN) to forecast the Mackey-Glass time series elements. Experimental results show that our scheme outperforms other state-of-art studies.", "title": "" }, { "docid": "456a246b468feb443e0ed576173d6d46", "text": "Automatic person re-identification (re-id) across camera boundaries is a challenging problem. Approaches have to be robust against many factors which influence the visual appearance of a person but are not relevant to the person's identity. Examples for such factors are pose, camera angles, and lighting conditions. Person attributes are a semantic high level information which is invariant across many such influences and contain information which is often highly relevant to a person's identity. In this work we develop a re-id approach which leverages the information contained in automatically detected attributes. We train an attribute classifier on separate data and include its responses into the training process of our person re-id model which is based on convolutional neural networks (CNNs). This allows us to learn a person representation which contains information complementary to that contained within the attributes. Our approach is able to identify attributes which perform most reliably for re-id and focus on them accordingly. 
We demonstrate the performance improvement gained through use of the attribute information on multiple large-scale datasets and report insights into which attributes are most relevant for person re-id.", "title": "" }, { "docid": "6975d0200669923b414f1775c208b91b", "text": "Wireless sensor networks (WSNs) have attracted a lot of interest over the last decade in wireless and mobile computing research community. Applications of WSNs are numerous and growing, which range from indoor deployment scenarios in the home and office to outdoor deployment in adversary’s territory in a tactical battleground. However, due to distributed nature and their deployment in remote areas, these networks are vulnerable to numerous security threats that can adversely affect their performance. This problem is more critical if the network is deployed for some mission-critical applications such as in a tactical battlefield. Random failure of nodes is also very likely in real-life deployment scenarios. Due to resource constraints in the sensor nodes, traditional security mechanisms with large overhead of computation and communication are infeasible in WSNs. Design and implementation of secure WSNs is, therefore, a particularly challenging task. This chapter provides a comprehensive discussion on the state of the art in security technologies for WSNs. It identifies various possible attacks at different layers of the communication protocol stack in a typical WSN and presents their possible countermeasures. A brief discussion on the future direction of research in WSN security is also included.", "title": "" }, { "docid": "f2579b9d625018867f4c1738d046ec7a", "text": "Carpenter syndrome, a rare autosomal recessive disorder characterized by a combination of craniosynostosis, polysyndactyly, obesity, and other congenital malformations, is caused by mutations in RAB23, encoding a member of the Rab-family of small GTPases. In 15 out of 16 families previously reported, the disease was caused by homozygosity for truncating mutations, and currently only a single missense mutation has been identified in a compound heterozygote. Here, we describe a further 8 independent families comprising 10 affected individuals with Carpenter syndrome, who were positive for mutations in RAB23. We report the first homozygous missense mutation and in-frame deletion, highlighting key residues for RAB23 function, as well as the first splice-site mutation. Multi-suture craniosynostosis and polysyndactyly have been present in all patients described to date, and abnormal external genitalia have been universal in boys. High birth weight was not evident in the current group of patients, but further evidence for laterality defects is reported. No genotype-phenotype correlations are apparent. We provide experimental evidence that transcripts encoding truncating mutations are subject to nonsense-mediated decay, and that this plays an important role in the pathogenesis of many RAB23 mutations. These observations refine the phenotypic spectrum of Carpenter syndrome and offer new insights into molecular pathogenesis.", "title": "" }, { "docid": "6194a43f6c355c921e5dee3e3a368696", "text": "Inverse reinforcement learning (IRL) is the problem of inferring the underlying reward function from the expert's behavior data. The difficulty in IRL mainly arises in choosing the best reward function since there are typically an infinite number of reward functions that yield the given behavior data as optimal. 
Another difficulty comes from the noisy behavior data due to sub-optimal experts. We propose a hierarchical Bayesian framework, which subsumes most of the previous IRL algorithms as well as models the sub-optimality of the expert's behavior. Using a number of experiments on a synthetic problem, we demonstrate the effectiveness of our approach including the robustness of our hierarchical Bayesian framework to the sub-optimal expert behavior data. Using a real dataset from taxi GPS traces, we additionally show that our approach predicts the driving behavior with a high accuracy.", "title": "" }, { "docid": "536e45f7130aa40625e3119523d2e1de", "text": "We consider the problem of Simultaneous Localization and Mapping (SLAM) from a Bayesian point of view using the Rao-Blackwellised Particle Filter (RBPF). We focus on the class of indoor mobile robots equipped with only a stereo vision sensor. Our goal is to construct dense metric maps of natural 3D point landmarks for large cyclic environments in the absence of accurate landmark position measurements and reliable motion estimates. Landmark estimates are derived from stereo vision and motion estimates are based on visual odometry. We distinguish between landmarks using the Scale Invariant Feature Transform (SIFT). Our work defers from current popular approaches that rely on reliable motion models derived from odometric hardware and accurate landmark measurements obtained with laser sensors. We present results that show that our model is a successful approach for vision-based SLAM, even in large environments. We validate our approach experimentally, producing the largest and most accurate vision-based map to date, while we identify the areas where future research should focus in order to further increase its accuracy and scalability to significantly larger", "title": "" } ]
scidocsrr
b9ac9cd12227382c346aad97c58efb84
Join-Graph Propagation Algorithms
[ { "docid": "14a15a7fb3964aad438191737a0dacb9", "text": "Yair Weiss Computer Science Division UC Berkeley, 485 Soda Hall Berkeley, CA 94720-1776 Phone: 510-642-5029 yweiss@cs.berkeley.edu Belief propagation (BP) was only supposed to work for tree-like networks but works surprisingly well in many applications involving networks with loops, including turbo codes. However, there has been little understanding of the algorithm or the nature of the solutions it finds for general graphs. We show that BP can only converge to a stationary point of an approximate free energy, known as the Bethe free energy in statistical physics. This result characterizes BP fixed-points and makes connections with variational approaches to approximate inference. More importantly, our analysis lets us build on the progress made in statistical physics since Bethe's approximation was introduced in 1935. Kikuchi and others have shown how to construct more accurate free energy approximations, of which Bethe's approximation is the simplest. Exploiting the insights from our analysis, we derive generalized belief propagation (GBP) versions ofthese Kikuchi approximations. These new message passing algorithms can be significantly more accurate than ordinary BP, at an adjustable increase in complexity. We illustrate such a new GBP algorithm on a grid Markov network and show that it gives much more accurate marginal probabilities than those found using ordinary BP.", "title": "" } ]
[ { "docid": "ce5efa83002cee32a5ef8b8b73b81a60", "text": "Registering a 3D facial model to a 2D image under occlusion is difficult. First, not all of the detected facial landmarks are accurate under occlusions. Second, the number of reliable landmarks may not be enough to constrain the problem. We propose a method to synthesize additional points (Sensible Points) to create pose hypotheses. The visual clues extracted from the fiducial points, non-fiducial points, and facial contour are jointly employed to verify the hypotheses. We define a reward function to measure whether the projected dense 3D model is well-aligned with the confidence maps generated by two fully convolutional networks, and use the function to train recurrent policy networks to move the Sensible Points. The same reward function is employed in testing to select the best hypothesis from a candidate pool of hypotheses. Experimentation demonstrates that the proposed approach is very promising in solving the facial model registration problem under occlusion.", "title": "" }, { "docid": "d3f7765e3c0c1b7dce03475b74336670", "text": "The Interpretation of Dreams contains Freud's first and most complete articulation of the primary and secondary mental processes that serve as a framework for the workings of mind, conscious and unconscious. While it is generally believed that Freud proposed a single theory of dreaming, based on the primary process, a number of ambiguities, inconsistencies, and contradictions reflect an incomplete differentiation of the parts played by the two mental processes in dreaming. It is proposed that two radically different hypotheses about dreaming are embedded in Freud's work. The one implicit in classical dream interpretation is based on the assumption that dreams, like waking language, are representational, and are made up of symbols connected to latent unconscious thoughts. Whereas the symbols that constitute waking language are largely verbal and only partly unconscious, those that constitute dreams are presumably more thoroughly disguised and represented as arcane hallucinated hieroglyphs. From this perspective, both the language of the dream and that of waking life are secondary process manifestations. Interpretation of the dream using the secondary process model involves the assumption of a linear two-way \"road\" connecting manifest and latent aspects, which in one direction involves the work of dream construction and in the other permits the associative process of decoding and interpretation. Freud's more revolutionary hypothesis, whose implications he did not fully elaborate, is that dreams are the expression of a primary mental process that differs qualitatively from waking thought and hence are incomprehensible through a secondary process model. This seems more adequately to account for what is now known about dreaming, and is more consistent with the way dream interpretation is ordinarily conducted in clinical practice. Recognition that dreams are qualitatively distinctive expressions of mind may help to restore dreaming to its privileged position as a unique source of mental status information.", "title": "" }, { "docid": "4ea7482524661175e8268c15eb22a6ae", "text": "We present a fully unsupervised, extractive text summarization system that leverages a submodularity framework introduced by past research. The framework allows summaries to be generated in a greedy way while preserving near-optimal performance guarantees. 
Our main contribution is the novel coverage reward term of the objective function optimized by the greedy algorithm. This component builds on the graph-of-words representation of text and the k-core decomposition algorithm to assign meaningful scores to words. We evaluate our approach on the AMI and ICSI meeting speech corpora, and on the DUC2001 news corpus. We reach state-of-the-art performance on all datasets. Results indicate that our method is particularly well-suited to the meeting domain.", "title": "" }, { "docid": "8d4007b4d769c2d90ae07b5fdaee8688", "text": "In this project, we implement the semi-supervised Recursive Autoencoders (RAE), and achieve the result comparable with result in [1] on the Movie Review Polarity dataset1. We achieve 76.08% accuracy, which is slightly lower than [1] ’s result 76.8%, with less vector length. Experiments show that the model can learn sentiment and build reasonable structure from sentence.We find longer word vector and adjustment of words’ meaning vector is beneficial, while normalization of transfer function brings some improvement. We also find normalization of the input word vector may be beneficial for training.", "title": "" }, { "docid": "cb67ffc6559d42628022994961179208", "text": "Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Build upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices with parameters of FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN using image slices. Particularly, we train 3 segmentation models using 2D image patches and slices obtained in axial, coronal and sagittal views respectively, and combine them to segment brain tumors using a voting based fusion strategy. Our method could segment brain images slice-by-slice, much faster than those based on image patches. We have evaluated our method based on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015 and BRATS 2016. The experimental results have demonstrated that our method could build a segmentation model with Flair, T1c, and T2 scans and achieve competitive performance as those built with Flair, T1, T1c, and T2 scans.", "title": "" }, { "docid": "4f84d3a504cf7b004a414346bb19fa94", "text": "Abstract—The electric power supplied by a photovoltaic power generation systems depends on the solar irradiation and temperature. The PV system can supply the maximum power to the load at a particular operating point which is generally called as maximum power point (MPP), at which the entire PV system operates with maximum efficiency and produces its maximum power. Hence, a Maximum power point tracking (MPPT) methods are used to maximize the PV array output power by tracking continuously the maximum power point. The proposed MPPT controller is designed for 10kW solar PV system installed at Cape Institute of Technology. This paper presents the fuzzy logic based MPPT algorithm. 
However, instead of one type of membership function, different structures of fuzzy membership functions are used in the FLC design. The proposed controller is combined with the system and the results are obtained for each membership functions in Matlab/Simulink environment. Simulation results are decided that which membership function is more suitable for this system.", "title": "" }, { "docid": "3f807cb7e753ebd70558a0ce74b416b7", "text": "In this paper, we study the problem of recovering a tensor with missing data. We propose a new model combining the total variation regularization and low-rank matrix factorization. A block coordinate decent (BCD) algorithm is developed to efficiently solve the proposed optimization model. We theoretically show that under some mild conditions, the algorithm converges to the coordinatewise minimizers. Experimental results are reported to demonstrate the effectiveness of the proposed model and the efficiency of the numerical scheme. © 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "62d7490c530808eb7158f601292a55a1", "text": "Together with an explosive growth of the mobile applications and emerging of cloud computing concept, mobile cloud computing (MCC) has been introduced to be a potential technology for mobile services. MCC integrates the cloud computing into the mobile environment and overcomes obstacles related to the performance (e.g., battery life, storage, and bandwidth), environment (e.g., heterogeneity, scalability, and availability), and security (e.g., reliability and privacy) discussed in mobile computing. This paper gives a survey of MCC, which helps general readers have an overview of the MCC including the definition, architecture, and applications. The issues, existing solutions, and approaches are presented. In addition, the future research directions of MCC are discussed. Copyright © 2011 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "7486f1152699fbe639ff9427bfc202f3", "text": "This paper aims to study the recent work regarding the maximum power point tracker (MPPT) based on a sliding-mode manner for a PV array. Such a MPPT accompanies a perturbation and observation algorithm as well as a boost converter. Moreover, it needs a straight sliding-line and some adjustments on its location until a PV characteristic curve and this sliding line cross each other at the maximum power point (MPP). The switching pattern of the boost converter is influenced by this straight line so that the PV operating point is compelled to move along an instant PV curve and onto the sliding line. Thereby the PV array will generate the maximum power. An easier explanation for convergence towards the sliding line is provided. Through the boost converter and the single-phase inverter, power delivery from a PV array to an AC grid under the above MPPT will be also considered.", "title": "" }, { "docid": "2ebe6832af61085200d4aef27f2be3a5", "text": "This paper deals with the development and the parameter identification of an anaerobic digestion process model. A two-step (acidogenesis-methanization) mass-balance model has been considered. The model incorporates electrochemical equilibria in order to include the alkalinity, which has to play a central role in the related monitoring and control strategy of a treatment plant. The identification is based on a set of dynamical experiments designed to cover a wide spectrum of operating conditions that are likely to take place in the practical operation of the plant. 
A step by step identification procedure to estimate the model parameters is presented. The results of 70 days of experiments in a 1-m(3) fermenter are then used to validate the model.", "title": "" }, { "docid": "95ae85733d7c95912d7cd92b105d4e66", "text": "The reengineering of legacy code is a tedious endeavor. Automatic transformation of legacy code from an old technology to a new one preserves potential problems in legacy code with respect to obsolete, changed, and new business cases. On the other hand, manual analysis of legacy code without assistance of original developers is time consuming and error-prone. For the purpose of reengineering PL/SQL legacy code in the steel making domain, we developed tool support for the reverse engineering of PL/SQL code into a more abstract and comprehensive representation. This representation then serves as input for stakeholders to manually analyze legacy code, to identify obsolete and missing business cases, and, finally, to support the re-implementation of a new system. In this paper we briefly introduce the tool and present results of reverse engineering PL/SQL legacy code in the steel making domain. We show how stakeholders are supported in analyzing legacy code by means of general-purpose analysis techniques combined with domain-specific representations and conclude with some of the lessons learned.", "title": "" }, { "docid": "0dac38edf20c2a89a9eb46cd1300162c", "text": "Common software weaknesses, such as improper input validation, integer overflow, can harm system security directly or indirectly, causing adverse effects such as denial-of-service, execution of unauthorized code. Common Weakness Enumeration (CWE) maintains a standard list and classification of common software weakness. Although CWE contains rich information about software weaknesses, including textual descriptions, common sequences and relations between software weaknesses, the current data representation, i.e., hyperlined documents, does not support advanced reasoning tasks on software weaknesses, such as prediction of missing relations and common consequences of CWEs. Such reasoning tasks become critical to managing and analyzing large numbers of common software weaknesses and their relations. In this paper, we propose to represent common software weaknesses and their relations as a knowledge graph, and develop a translation-based, description-embodied knowledge representation learning method to embed both software weaknesses and their relations in the knowledge graph into a semantic vector space. The vector representations (i.e., embeddings) of software weaknesses and their relations can be exploited for knowledge acquisition and inference. We conduct extensive experiments to evaluate the performance of software weakness and relation embeddings in three reasoning tasks, including CWE link prediction, CWE triple classification, and common consequence prediction. Our knowledge graph embedding approach outperforms other description- and/or structure-based representation learning methods.", "title": "" }, { "docid": "965472260a2ab6762c8d846040171cfe", "text": "With growing computing power, physical simulations have become increasingly important in computer graphics. Content creation for movies and interactive computer games relies heavily on physical models, and physicallyinspired interactions have proven to be a great metaphor for shape modeling. This tutorial will acquaint the reader with meshless methods for simulation and modeling. 
These methods differ from the more common grid or mesh-based methods in that they require less constraints on the spatial discretization. Since the algorithmic structure of simulation algorithms so critically depends on the underlying discretization, we will first treat methods for function approximation from discrete, irregular samples: smoothed particle hydrodynamics and moving least squares. This discussion will include numerical properties as well as complexity considerations. In the second part of this tutorial, we will then treat a number of applications for these approximation schemes. The smoothed particle hydrodynamics framework is used in fluid dynamics and has proven particularly popular in real-time applications. Moving least squares approximations provide higher order consistency, and are therefore suited for the simulation of elastic solids. We will cover both basic elasticity and applications in modeling.", "title": "" }, { "docid": "abf7ee5b09e679bfaabefc49cb45371a", "text": "The work to be performed on open source systems, whether feature developments or defects, is typically described as an issue (or bug). Developers self-select bugs from the many open bugs in a repository when they wish to perform work on the system. This paper evaluates a recommender, called NextBug, that considers the textual similarity of bug descriptions to predict bugs that require handling of similar code fragments. First, we evaluate this recommender using 69 projects in the Mozilla ecosystem. We show that for detecting similar bugs, a technique that considers just the bug components and short descriptions perform just as well as a more complex technique that considers other features. Second, we report a field study where we monitored the bugs fixed for Mozilla during a week. We sent mails to the developers who fixed these bugs, asking whether they would consider working on the recommendations provided by NextBug, 39 developers (59%) stated that they would consider working on these recommendations, 44 developers (67%) also expressed interest in seeing the recommendations in their bug tracking system.", "title": "" }, { "docid": "935c1dc7c60c6179dd5c854cb92526e6", "text": "BACKGROUND\nAlthough surgical site infections (SSIs) are known to be associated with increased length of stay (LOS) and additional cost, their impact on the profitability of surgical procedures is unknown.\n\n\nAIM\nTo determine the clinical and economic burden of SSI over a two-year period and to predict the financial consequences of their elimination.\n\n\nMETHODS\nSSI surveillance and Patient Level Information and Costing System (PLICS) datasets for patients who underwent major surgical procedures at Plymouth Hospitals NHS Trust between April 2010 and March 2012 were consolidated. The main outcome measures were the attributable postoperative length of stay (LOS), cost, and impact on the margin differential (profitability) of SSI. A secondary outcome was the predicted financial consequence of eliminating all SSIs.\n\n\nFINDINGS\nThe median additional LOS attributable to SSI was 10 days [95% confidence interval (CI): 7-13 days] and a total of 4694 bed-days were lost over the two-year period. The median additional cost attributable to SSI was £5,239 (95% CI: 4,622-6,719) and the aggregate extra cost over the study period was £2,491,424. After calculating the opportunity cost of eliminating all SSIs that had occurred in the two-year period, the combined overall predicted financial benefit of doing so would have been only £694,007. 
For seven surgical categories, the hospital would have been financially worse off if it had successfully eliminated all SSIs.\n\n\nCONCLUSION\nSSI causes significant clinical and economic burden. Nevertheless the current system of reimbursement provided a financial disincentive to their reduction.", "title": "" }, { "docid": "cd59460d293aa7ecbb9d7b96ed451b9a", "text": "PURPOSE\nThe prevalence of work-related upper extremity musculoskeletal disorders and visual symptoms reported in the USA has increased dramatically during the past two decades. This study examined the factors of computer use, workspace design, psychosocial factors, and organizational ergonomics resources on musculoskeletal and visual discomfort and their impact on the safety and health of computer work employees.\n\n\nMETHODS\nA large-scale, cross-sectional survey was administered to a US manufacturing company to investigate these relationships (n = 1259). Associations between these study variables were tested along with moderating effects framed within a conceptual model.\n\n\nRESULTS\nSignificant relationships were found between computer use and psychosocial factors of co-worker support and supervisory relations with visual and musculoskeletal discomfort. Co-worker support was found to be significantly related to reports of eyestrain, headaches, and musculoskeletal discomfort. Supervisor relations partially moderated the relationship between workspace design satisfaction and visual and musculoskeletal discomfort.\n\n\nCONCLUSION\nThis study provides guidance for developing systematic, preventive measures and recommendations in designing office ergonomics interventions with the goal of reducing musculoskeletal and visual discomfort while enhancing office and computer workers' performance and safety.", "title": "" }, { "docid": "f4503626420d2f17e0716312a7c325ad", "text": "Segmentation of left ventricular (LV) endocardium from 3D echocardiography is important for clinical diagnosis because it not only can provide some clinical indices (e.g. ventricular volume and ejection fraction) but also can be used for the analysis of anatomic structure of ventricle. In this work, we proposed a new full-automatic method, combining the deep learning and deformable model, for the segmentation of LV endocardium. We trained convolutional neural networks to generate a binary cuboid to locate the region of interest (ROI). And then, using ROI as the input, we trained stacked autoencoder to infer the LV initial shape. At last, we adopted snake model initiated by inferred shape to segment the LV endocardium. In the experiments, we used 3DE data, from CETUS challenge 2014 for training and testing by segmentation accuracy and clinical indices. The results demonstrated the proposed method is accuracy and efficiency respect to expert's measurements.", "title": "" }, { "docid": "97691304930a85066a15086877473857", "text": "In the context of modern cryptosystems, a common theme is the creation of distributed trust networks. In most of these designs, permanent storage of a contract is required. However, permanent storage can become a major performance and cost bottleneck. As a result, good code compression schemes are a key factor in scaling these contract based cryptosystems. For this project, we formalize and implement a data structure called the Merkelized Abstract Syntax Tree (MAST) to address both data integrity and compression. 
MASTs can be used to compactly represent contractual programs that will be executed remotely, and by using some of the properties of Merkle trees, they can also be used to verify the integrity of the code being executed. A concept by the same name has been discussed in the Bitcoin community for a while, the terminology originates from the work of Russel O’Connor and Pieter Wuille, however this discussion was limited to private correspondences. We present a formalization of it and provide an implementation.The project idea was developed with Bitcoin applications in mind, and the experiment we set up uses MASTs in a crypto currency network simulator. Using MASTs in the Bitcoin protocol [2] would increase the complexity (length) of contracts permitted on the network, while simultaneously maintaining the security of broadcasted data. Additionally, contracts may contain privileged, secret branches of execution.", "title": "" }, { "docid": "57bebb90000790a1d76a400f69d5736d", "text": "In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projec-tion(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof.", "title": "" }, { "docid": "fee50f8ab87f2b97b83ca4ef92f57410", "text": "Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.", "title": "" } ]
scidocsrr
6c67ca5d25aa97036c67dc21236d71b6
Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering
[ { "docid": "e2134be71cdf4619a046128321efe177", "text": "This paper describes the KeLP system participating in the SemEval-2016 Community Question Answering (cQA) task. The challenge tasks are modeled as binary classification problems: kernel-based classifiers are trained on the SemEval datasets and their scores are used to sort the instances and produce the final ranking. All classifiers and kernels have been implemented within the Kernel-based Learning Platform called KeLP. Our primary submission ranked first in Subtask A, third in Subtask B and second in Subtask C. These ranks are based on MAP, which is the referring challenge system score. Our approach outperforms all the other systems with respect to all the other challenge metrics.", "title": "" }, { "docid": "60ea2144687d867bb4f6b21e792a8441", "text": "Stochastic gradient descent is a simple approach to find the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidian case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and is illustrated here by four examples. In particular a novel gossip algorithm on the set of covariance matrices is derived and tested numerically.", "title": "" }, { "docid": "dc8af875967521aa7254da94a762b6f7", "text": "We describe a new deep learning architecture for learning to rank question answer pairs. Our approach extends the long short-term memory (LSTM) network with holographic composition to model the relationship between question and answer representations. As opposed to the neural tensor layer that has been adopted recently, the holographic composition provides the benefits of scalable and rich representational learning approach without incurring huge parameter costs. Overall, we present Holographic Dual LSTM (HD-LSTM), a unified architecture for both deep sentence modeling and semantic matching. Essentially, our model is trained end-to-end whereby the parameters of the LSTM are optimized in a way that best explains the correlation between question and answer representations. In addition, our proposed deep learning architecture requires no extensive feature engineering. Via extensive experiments, we show that HD-LSTM outperforms many other neural architectures on two popular benchmark QA datasets. Empirical studies confirm the effectiveness of holographic composition over the neural tensor layer.", "title": "" }, { "docid": "87e315548e67f8de46ad0cb3db8b7aaa", "text": "We study answer selection for question answering, in which given a question and a set of candidate answer sentences, the goal is to identify the subset that contains the answer. Unlike previous work which treats this task as a straightforward pointwise classification problem, we model this problem as a ranking task and propose a pairwise ranking approach that can directly exploit existing pointwise neural network models as base components. We extend the Noise-Contrastive Estimation approach with a triplet ranking loss function to exploit interactions in triplet inputs over the question paired with positive and negative examples. Experiments on TrecQA and WikiQA datasets show that our approach achieves state-of-the-art effectiveness without the need for external knowledge sources or feature engineering.", "title": "" } ]
[ { "docid": "2132600ccd10cbf2c664cf42c68bc38c", "text": "We stabilize the activations of Recurrent Neural Networks (RNNs) by penalizing the squared distance between successive hidden states norms. This penalty term is an effective regularizer for RNNs including LSTMs and IRNNs, improving performance on character-level language modelling and phoneme recognition, and outperforming weight noise. With this penalty term, IRNN can achieve similar performance to LSTM on language modelling, although adding the penalty term to the LSTM results in superior performance. Our penalty term also prevents the exponential growth of IRNNs activations outside of their training horizon, allowing them to generalize to much longer sequences.", "title": "" }, { "docid": "476eccd2e0592256a5726a27be9feceb", "text": "Visual attention, which assigns weights to image regions according to their relevance to a question, is considered as an indispensable part by most Visual Question Answering models. Although the questions may involve complex rela- tions among multiple regions, few attention models can ef- fectively encode such cross-region relations. In this paper, we demonstrate the importance of encoding such relations by showing the limited effective receptive field of ResNet on two datasets, and propose to model the visual attention as a multivariate distribution over a grid-structured Con- ditional Random Field on image regions. We demonstrate how to convert the iterative inference algorithms, Mean Field and Loopy Belief Propagation, as recurrent layers of an end-to-end neural network. We empirically evalu- ated our model on 3 datasets, in which it surpasses the best baseline model of the newly released CLEVR dataset [13] by 9.5%, and the best published model on the VQA dataset [3] by 1.25%. Source code is available at https://github.com/zhuchen03/vqa-sva.", "title": "" }, { "docid": "ef8ba8ae9696333f5da066813a4b79d7", "text": "Neural image/video captioning models can generate accurate descriptions, but their internal process of mapping regions to words is a black box and therefore difficult to explain. Top-down neural saliency methods can find important regions given a high-level semantic task such as object classification, but cannot use a natural language sentence as the top-down input for the task. In this paper, we propose Caption-Guided Visual Saliency to expose the region-to-word mapping in modern encoder-decoder networks and demonstrate that it is learned implicitly from caption training data, without any pixel-level annotations. Our approach can produce spatial or spatiotemporal heatmaps for both predicted captions, and for arbitrary query sentences. It recovers saliency without the overhead of introducing explicit attention layers, and can be used to analyze a variety of existing model architectures and improve their design. Evaluation on large-scale video and image datasets demonstrates that our approach achieves comparable captioning performance with existing methods while providing more accurate saliency heatmaps. Our code is available at visionlearninggroup.github.io/caption-guided-saliency/.", "title": "" }, { "docid": "aba4e6baa69a2ca7d029ebc33931fd4d", "text": "Along with the improvement of radar technologies Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) and Inverse SAR (ISAR) has come to be an active research area. SAR/ISAR are radar techniques to generate a two-dimensional high-resolution image of a target. 
Unlike other similar experiments using Convolutional Neural Networks (CNN) to solve this problem, we utilize an unusual approach that leads to better performance and faster training times. Our CNN uses complex values generated by a simulation to train the network; additionally, we utilize a multi-radar approach to increase the accuracy of the training and testing processes, thus resulting in higher accuracies than the other papers working on SAR/ISAR ATR. We generated our dataset with 7 different aircraft models with a radar simulator we developed called RadarPixel; it is a Windows GUI program implemented using Matlab and Java programing, the simulator is capable of accurately replicating a real SAR/ISAR configurations. Our objective is utilize our multiradar technique and determine the optimal number of radars needed to detect and classify targets.", "title": "" }, { "docid": "a4969e82e3cccf5c9ca7177d4ca5007c", "text": "Traditional views of automaticity are in need of revision. For example, automaticity often has been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. A model of attention is presented to address these issues. Within a parallel distributed processing framework, it is proposed that the attributes of automaticity depend on the strength of a processing pathway and that strength increases with training. With the Stroop effect as an example, automatic processes are shown to be continuous and to emerge gradually with practice. Specifically, a computational model of the Stroop task simulates the time course of processing as well as the effects of learning. This was accomplished by combining the cascade mechanism described by McClelland (1979) with the backpropagation learning algorithm (Rumelhart, Hinton, & Williams, 1986). The model can simulate performance in the standard Stroop task, as well as aspects of performance in variants of this task that manipulate stimulus-onset asynchrony, response set, and degree of practice. The model presented is contrasted against other models, and its relation to many of the central issues in the literature on attention, automaticity, and interference is discussed.", "title": "" }, { "docid": "92c5f9d8f33f00dc0ced4b2fa57916f3", "text": "Blockchain holds promise for being the revolutionary technology, which has the potential to find applications in numerous fields such as digital money, clearing, gambling and product tracing. However, blockchain faces its own problems and challenges. One key problem is to automatically cluster the behavior patterns of all the blockchain nodes into categories. In this paper, we introduce the problem of behavior pattern clustering in blockchain networks and propose a novel algorithm termed BPC for this problem. We evaluate a long list of potential sequence similarity measures, and select a distance that is suitable for the behavior pattern clustering problem. Extensive experiments show that our proposed algorithm is much more effective than the existing methods in terms of clustering accuracy.", "title": "" }, { "docid": "28d8be0cd581a9696c533b457ceb6628", "text": "Nowadays, people usually participate in multiple social networks simultaneously, e.g., Facebook and Twitter. 
Formally, the correspondences of the accounts that belong to the same user are defined as anchor links, and the networks aligned by anchor links can be denoted as aligned networks. In this paper, we study the problem of anchor link prediction (ALP) across a pair of aligned networks based on social network structure. First, three similarity metrics (CPS, CCS, and CPS+) are proposed. Different from the previous works, we focus on the theoretical guarantees of our metrics. We prove mathematically that the node pair with the maximum CPS or CPS+ should be an anchor link with high probability and a correctly predicted anchor link must have a high value of CCS. Second, using the CPS+ and CCS, we present a two-stage iterative algorithm CPCC to solve the problem of the ALP. More specifically, we present an early termination strategy to make a tradeoff between precision and recall. At last, a series of experiments are conducted on both synthetic and real-world social networks to demonstrate the effectiveness of the CPCC.", "title": "" }, { "docid": "8f47cd3066eefb2a4ceb279ba884a8a9", "text": "BACKGROUND\nEndothelin (ET)-1 is a potent vasoconstrictor that contributes to vascular remodeling in hypertension and other cardiovascular diseases. Endogenous ET-1 is produced predominantly by vascular endothelial cells. To directly test the role of endothelium-derived ET-1 in cardiovascular pathophysiology, we specifically targeted expression of the human preproET-1 gene to the endothelium by using the Tie-2 promoter in C57BL/6 mice.\n\n\nMETHODS AND RESULTS\nTen-week-old male C57BL/6 transgenic (TG) and nontransgenic (wild type; WT) littermates were studied. TG mice exhibited 3-fold higher vascular tissue ET-1 mRNA and 7-fold higher ET-1 plasma levels than did WT mice but no significant elevation in blood pressure. Despite the absence of significant blood pressure elevation, TG mice exhibited marked hypertrophic remodeling and oxidant excess-dependent endothelial dysfunction of resistance vessels, altered ET-1 and ET-3 vascular responses, and significant increases in ET(B) expression compared with WT littermates. Moreover, TG mice generated significantly higher oxidative stress, possibly through increased activity and expression of vascular NAD(P)H oxidase than did their WT counterparts.\n\n\nCONCLUSIONS\nIn this new murine model of endothelium-restricted human preproET-1 overexpression, ET-1 caused structural remodeling and endothelial dysfunction of resistance vessels, consistent with a direct nonhemodynamic effect of ET-1 on the vasculature, at least in part through the activation of vascular NAD(P)H oxidase.", "title": "" }, { "docid": "508ad7d072a62433f3233d90286ef902", "text": "The NP-hard Colorful Components problem is, given a vertex-colored graph, to delete a minimum number of edges such that no connected component contains two vertices of the same color. It has applications in multiple sequence alignment and in multiple network alignment where the colors correspond to species. We initiate a systematic complexity-theoretic study of Colorful Components by presenting NP-hardness as well as fixed-parameter tractability results for different variants of Colorful Components. 
We also perform experiments with our algorithms and additionally develop an efficient and very accurate heuristic algorithm clearly outperforming a previous min-cut-based heuristic on multiple sequence alignment data.", "title": "" }, { "docid": "dd9edd37ff5f4cb332fcb8a0ef86323e", "text": "This paper proposes several nonlinear control strategies for trajectory tracking of a quadcopter system based on the property of differential flatness. Its originality is twofold. Firstly, it provides a flat output for the quadcopter dynamics capable of creating full flat parametrization of the states and inputs. Moreover, B-splines characterizations of the flat output and their properties allow for optimal trajectory generation subject to way-point constraints. Secondly, several control strategies based on computed torque control and feedback linearization are presented and compared. The advantages of flatness within each control strategy are analyzed and detailed through extensive simulation results.", "title": "" }, { "docid": "5594fc8fec483698265abfe41b3776c9", "text": "This paper is an abridgement and update of numerous IEEE papers dealing with Squirrel Cage Induction Motor failure analysis. They are the result of a taxonomic study and research conducted by the author during a 40 year career in the motor industry. As the Petrochemical Industry is revolving to reliability based maintenance, increased attention should be given to preventing repeated failures. The Root Cause Failure methodology presented in this paper will assist in this transition. The scope of the product includes Squirrel Cage Induction Motors up to 3000 hp, however, much of this methodology has application to larger sizes and types.", "title": "" }, { "docid": "4ab3db4b0c338dbe8d5bb9e1f49f2a5c", "text": "BACKGROUND\nSub-Saharan African (SSA) countries are currently experiencing one of the most rapid epidemiological transitions characterized by increasing urbanization and changing lifestyle factors. This has resulted in an increase in the incidence of non-communicable diseases, especially cardiovascular disease (CVD). This double burden of communicable and chronic non-communicable diseases has long-term public health impact as it undermines healthcare systems.\n\n\nPURPOSE\nThe purpose of this paper is to explore the socio-cultural context of CVD risk prevention and treatment in sub-Saharan Africa. We discuss risk factors specific to the SSA context, including poverty, urbanization, developing healthcare systems, traditional healing, lifestyle and socio-cultural factors.\n\n\nMETHODOLOGY\nWe conducted a search on African Journals On-Line, Medline, PubMed, and PsycINFO databases using combinations of the key country/geographic terms, disease and risk factor specific terms such as \"diabetes and Congo\" and \"hypertension and Nigeria\". Research articles on clinical trials were excluded from this overview. Contrarily, articles that reported prevalence and incidence data on CVD risk and/or articles that report on CVD risk-related beliefs and behaviors were included. Both qualitative and quantitative articles were included.\n\n\nRESULTS\nThe epidemic of CVD in SSA is driven by multiple factors working collectively. Lifestyle factors such as diet, exercise and smoking contribute to the increasing rates of CVD in SSA. Some lifestyle factors are considered gendered in that some are salient for women and others for men. 
For instance, obesity is a predominant risk factor for women compared to men, but smoking still remains mostly a risk factor for men. Additionally, structural and system level issues such as lack of infrastructure for healthcare, urbanization, poverty and lack of government programs also drive this epidemic and hampers proper prevention, surveillance and treatment efforts.\n\n\nCONCLUSION\nUsing an African-centered cultural framework, the PEN3 model, we explore future directions and efforts to address the epidemic of CVD risk in SSA.", "title": "" }, { "docid": "0ad4432a79ea6b3eefbe940adf55ff7b", "text": "This study reviews the long-term outcome of prostheses and fixtures (implants) in 759 totally edentulous jaws of 700 patients. A total of 4,636 standard fixtures were placed and followed according to the osseointegration method for a maximum of 24 years by the original team at the University of Göteborg. Standardized annual clinical and radiographic examinations were conducted as far as possible. A lifetable approach was applied for statistical analysis. Sufficient numbers of fixtures and prostheses for a detailed statistical analysis were present for observation times up to 15 years. More than 95% of maxillae had continuous prosthesis stability at 5 and 10 years, and at least 92% at 15 years. The figure for mandibles was 99% at all time intervals. Calculated from the time of fixture placement, the estimated survival rates for individual fixtures in the maxilla were 84%, 89%, and 92% at 5 years; 81% and 82% at 10 years; and 78% at 15 years. In the mandible they were 91%, 98%, and 99% at 5 years; 89% and 98% at 10 years; and 86% at 15 years. (The different percentages at 5 and 10 years refer to results for different routine groups of fixtures with 5 to 10, 10 to 15, and 1 to 5 years of observation time, respectively.) The results of this study concur with multicenter and earlier results for the osseointegration method.", "title": "" }, { "docid": "552b72879933d434c2bcbca532c2ce6f", "text": "We present OpenML, a novel open science platform that provides easy access to machine learning data, software and results to encourage further study and application. It organizes all submitted results online so they can be easily found and reused, and features a web API which is being integrated in popular machine learning tools such as Weka, KNIME, RapidMiner and R packages, so that experiments can be shared easily.", "title": "" }, { "docid": "5edbc7588faccbae73037b50316656cb", "text": "Unmanned aerial vehicles (UAVs) are increasingly replacing manned systems in situations that are dangerous, remote, or difficult for manned aircraft to access. Its control tasks are empowered by computer vision technology. Visual sensors are robustly used for stabilization as primary or at least secondary sensors. Hence, UAV stabilization by attitude estimation from visual sensors is a very active research area. Vision based techniques are proving their effectiveness and robustness in handling this problem. In this work a comprehensive review of UAV vision based attitude estimation approaches is covered, starting from horizon based methods and passing by vanishing points, optical flow, and stereoscopic based techniques. A novel segmentation approach for UAV attitude estimation based on polarization is proposed. 
Our future insights for attitude estimation from uncalibrated catadioptric sensors are also discussed.", "title": "" }, { "docid": "a83c1f4a17f40d647a263e35f2cc7851", "text": "Designers of human computation systems often face the need to aggregate noisy information provided by multiple people. While voting is often used for this purpose, the choice of voting method is typically not principled. We conduct extensive experiments on Amazon Mechanical Turk to better understand how different voting rules perform in practice. Our empirical conclusions show that noisy human voting can differ from what popular theoretical models would predict. Our short-term goal is to motivate the design of better human computation systems; our long-term goal is to spark an interaction between researchers in (computational) social choice and human computation.", "title": "" }, { "docid": "85566c0da230598e4e3ec3d5428fdac3", "text": "Babesiosis is a tick-borne disease of cattle caused by the protozoan parasites. The causative agents of Babesiosis are specific for particular species of animals. In cattle: B. bovis and B. bigemina are the common species involved in babesiosis. Rhipicephalus (Boophilus) spp., the principal vectors of B. bovis and B. bigemina, are widespread in tropical and subtropical countries. Babesia multiplies in erythrocytes by asynchronous binary fission, resulting in considerable pleomorphism. Babesia produces acute disease by two principal mechanisms: hemolysis and circulatory disturbance. Affected animals suffered from marked rise in body temperature, loss of appetite, cessation of rumination, labored breathing, emaciation, progressive hemolytic anemia, various degrees of jaundice (Icterus). Lesions include an enlarged soft and pulpy spleen, a swollen liver, a gall bladder distended with thick granular bile, congested dark-coloured kidneys and generalized anemia and jaundice. The disease can be diagnosed by identification of the agent by using direct microscopic examination, nucleic acid-based diagnostic assays, in vitro culture and animal inoculation as well as serological tests like indirect fluorescent antibody, complement fixation and Enzyme-linked immunosorbent assays tests. Babesiosis occurs throughout the world. However, the distribution of the causative protozoa is governed by the geographical and seasonal distribution of the insect vectors. Recently Babesia becomes the most widespread parasite due to exposure of 400 million cattle to infection through the world, with consequent heavy economic losses such as mortality, reduction in meat and milk yield and indirectly through control measures of ticks. Different researches conducted in Ethiopia reveal the prevalence of the disease in different parts of the country. The most commonly used compounds for the treatment of babesiosis are diminazene diaceturate, imidocarb, and amicarbalide. Active prevention and control of Babesiosis is achieved by three main methods: immunization, chemoprophylaxis and vector control.", "title": "" }, { "docid": "a2cfae4f436a72a0f3896df98c9d14b3", "text": "Empathy--the ability to share the feelings of others--is fundamental to our emotional and social lives. Previous human imaging studies focusing on empathy for others' pain have consistently shown activations in regions also involved in the direct pain experience, particularly anterior insula and anterior and midcingulate cortex. 
These findings suggest that empathy is, in part, based on shared representations for firsthand and vicarious experiences of affective states. Empathic responses are not static but can be modulated by person characteristics, such as degree of alexithymia. It has also been shown that contextual appraisal, including perceived fairness or group membership of others, may modulate empathic neuronal activations. Empathy often involves coactivations in further networks associated with social cognition, depending on the specific situation and information available in the environment. Empathy-related insular and cingulate activity may reflect domain-general computations representing and predicting feeling states in self and others, likely guiding adaptive homeostatic responses and goal-directed behavior in dynamic social contexts.", "title": "" }, { "docid": "11ae42bedc18dedd0c29004000a4ec00", "text": "A hand injury can have great impact on a person's daily life. However, the current manual evaluations of hand functions are imprecise and inconvenient. In this research, a data glove embedded with 6-axis inertial sensors is proposed. With the proposed angle calculating algorithm, accurate bending angles are measured to estimate the real-time movements of hands. This proposed system can provide physicians with an efficient tool to evaluate the recovery of patients and improve the quality of hand rehabilitation.", "title": "" }, { "docid": "4d8d1ab2ed8a7200bcd95215017b37d4", "text": "We present an automatic system to reconstruct 3D urban models for residential areas from aerial LiDAR scans. The key difference between downtown area modeling and residential area modeling is that the latter usually contains rich vegetation. Thus, we propose a robust classification algorithm that effectively classifies LiDAR points into trees, buildings, and ground. The classification algorithm adopts an energy minimization scheme based on the 2.5D characteristic of building structures: buildings are composed of opaque skyward roof surfaces and vertical walls, making the interior of building structures invisible to laser scans; in contrast, trees do not possess such characteristic and thus point samples can exist underneath tree crowns. Once the point cloud is successfully classified, our system reconstructs buildings and trees respectively, resulting in a hybrid model representing the 3D urban reality of residential areas.", "title": "" } ]
scidocsrr
acf405c82d24dd2057cbd064e2898867
CoFiSet: Collaborative Filtering via Learning Pairwise Preferences over Item-sets
[ { "docid": "91f718a69532c4193d5e06bf1ea19fd3", "text": "Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task and requires a lot of expert knowledge. Typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented.\n Factorization machines (FM) are a generic approach since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain. libFM is a software implementation for factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov Chain Monto Carlo (MCMC). This article summarizes the recent research on factorization machines both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM.", "title": "" }, { "docid": "2d7d20d578573dab8af8aff960010fea", "text": "Two flavors of the recommendation problem are the explicit and the implicit feedback settings. In the explicit feedback case, users rate items and the user-item preference relationship can be modelled on the basis of the ratings. In the harder but more common implicit feedback case, the system has to infer user preferences from indirect information: presence or absence of events, such as a user viewed an item. One approach for handling implicit feedback is to minimize a ranking objective function instead of the conventional prediction mean squared error. The naive minimization of a ranking objective function is typically expensive. This difficulty is usually overcome by a trade-off: sacrificing the accuracy to some extent for computational efficiency by sampling the objective function. In this paper, we present a computationally effective approach for the direct minimization of a ranking objective function, without sampling. We demonstrate by experiments on the Y!Music and Netflix data sets that the proposed method outperforms other implicit feedback recommenders in many cases in terms of the ErrorRate, ARP and Recall evaluation metrics.", "title": "" } ]
[ { "docid": "736a454a8aa08edf645312cecc7925c3", "text": "This paper describes an <i>analogy ontology</i>, a formal representation of some key ideas in analogical processing, that supports the integration of analogical processing with first-principles reasoners. The ontology is based on Gentner's <i>structure-mapping</i> theory, a psychological account of analogy and similarity. The semantics of the ontology are enforced via procedural attachment, using cognitive simulations of structure-mapping to provide analogical processing services. Queries that include analogical operations can be formulated in the same way as standard logical inference, and analogical processing systems in turn can call on the services of first-principles reasoners for creating cases and validating their conjectures. We illustrate the utility of the analogy ontology by demonstrating how it has been used in three systems: A crisis management analogical reasoner that answers questions about international incidents, a course of action analogical critiquer that provides feedback about military plans, and a comparison question-answering system for knowledge capture. These systems rely on large, general-purpose knowledge bases created by other research groups, thus demonstrating the generality and utility of these ideas.", "title": "" }, { "docid": "ca683d498e690198ca433050c3d91fd0", "text": "Cross-graph Relational Learning (CGRL) refers to the problem of predicting the strengths or labels of multi-relational tuples of heterogeneous object types, through the joint inference over multiple graphs which specify the internal connections among each type of objects. CGRL is an open challenge in machine learning due to the daunting number of all possible tuples to deal with when the numbers of nodes in multiple graphs are large, and because the labeled training instances are extremely sparse as typical. Existing methods such as tensor factorization or tensor-kernel machines do not work well because of the lack of convex formulation for the optimization of CGRL models, the poor scalability of the algorithms in handling combinatorial numbers of tuples, and/or the non-transductive nature of the learning methods which limits their ability to leverage unlabeled data in training. This paper proposes a novel framework which formulates CGRL as a convex optimization problem, enables transductive learning using both labeled and unlabeled tuples, and offers a scalable algorithm that guarantees the optimal solution and enjoys a linear time complexity with respect to the sizes of input graphs. In our experiments with a subset of DBLP publication records and an Enzyme multi-source dataset, the proposed method successfully scaled to the large cross-graph inference problem, and outperformed other representative approaches significantly.", "title": "" }, { "docid": "7d08501a0123d773f9fe755f1612e57e", "text": "Language-music comparative studies have highlighted the potential for shared resources or neural overlap in auditory short-term memory. However, there is a lack of behavioral methodologies for comparing verbal and musical serial recall. We developed a visual grid response that allowed both musicians and nonmusicians to perform serial recall of letter and tone sequences. The new method was used to compare the phonological similarity effect with the impact of an operationalized musical equivalent-pitch proximity. 
Over the course of three experiments, we found that short-term memory for tones had several similarities to verbal memory, including limited capacity and a significant effect of pitch proximity in nonmusicians. Despite being vulnerable to phonological similarity when recalling letters, however, musicians showed no effect of pitch proximity, a result that we suggest might reflect strategy differences. Overall, the findings support a limited degree of correspondence in the way that verbal and musical sounds are processed in auditory short-term memory.", "title": "" }, { "docid": "f24bfd745d9f28a96de1d3a897bf91e6", "text": "In this paper, autoregressive parameter estimation for Kalman filtering speech enhancement is studied. In conventional Kalman filtering speech enhancement, spectral subtraction is usually used for speech autoregressive (AR) parameter estimation. We propose log spectral amplitude (LSA) minimum mean-square error (MMSE) instead of spectral subtraction for the estimation of speech AR parameters. Based on an observation that full-band Kalman filtering speech enhancement often causes an unbalanced noise reduction between speech and non-speech segments, a spectral solution is proposed to overcome the unbalanced reduction of noise. This is done by shaping the spectral envelopes of the noise through likelihood ratio. Our simulation results show the effectiveness of the proposed method.", "title": "" }, { "docid": "ad526a01f76956af87be7287c5cdb964", "text": "Model-based reinforcement learning is a powerful paradigm for learning tasks in robotics. However, in-depth exploration is usually required and the actions have to be known in advance. Thus, we propose a novel algorithm that integrates the option of requesting teacher demonstrations to learn new domains with fewer action executions and no previous knowledge. Demonstrations allow new actions to be learned and they greatly reduce the amount of exploration required, but they are only requested when they are expected to yield a significant improvement because the teacher’s time is considered to be more valuable than the robot’s time. Moreover, selecting the appropriate action to demonstrate is not an easy task, and thus some guidance is provided to the teacher. The rule-based model is analyzed to determine the parts of the state that may be incomplete, and to provide the teacher with a set of possible problems for which a demonstration is needed. Rule analysis is also used to find better alternative models and to complete subgoals before requesting help, thereby minimizing the number of requested demonstrations. These improvements were demonstrated in a set of experiments, which included domains from the international planning competition and a robotic task. Adding teacher demonstrations and rule analysis reduced the amount of exploration required by up to 60% in some domains, and improved the success ratio by 35% in other domains.", "title": "" }, { "docid": "701fb71923bb8a2fc90df725074f576b", "text": "Quantum computing poses challenges to public key signatures as we know them today. LMS and XMSS are two hash based signature schemes that have been proposed in the IETF as quantum secure. Both schemes are based on well-studied hash trees, but their similarities and differences have not yet been discussed. In this work, we attempt to compare the two standards. We compare their security assumptions and quantify their signature and public key sizes. We also address the computation overhead they introduce. 
Our goal is to provide a clear understanding of the schemes’ similarities and differences for implementers and protocol designers to be able to make a decision as to which standard to choose.", "title": "" }, { "docid": "98c9adda989991cc2d2ddbe27988a2cd", "text": "Multi-user, touch-sensing input devices create opportunities for the use of cooperative gestures -- multi-user gestural interactions for single display groupware. Cooperative gestures are interactions where the system interprets the gestures of more than one user as contributing to a single, combined command. Cooperative gestures can be used to enhance users' sense of teamwork, increase awareness of important system events, facilitate reachability and access control on large, shared displays, or add a unique touch to an entertainment-oriented activity. This paper discusses motivating scenarios for the use of cooperative gesturing and describes some initial experiences with CollabDraw, a system for collaborative art and photo manipulation. We identify design issues relevant to cooperative gesturing interfaces, and present a preliminary design framework. We conclude by identifying directions for future research on cooperative gesturing interaction techniques.", "title": "" }, { "docid": "38be1070365c2c8c2214ff1aafccd8c3", "text": "We investigate the problem of transforming an input sequence into a high-dimensional output sequence in order to transcribe polyphonic audio music into symbolic notation. We introduce a probabilistic model based on a recurrent neural network that is able to learn realistic output distributions given the input and we devise an efficient algorithm to search for the global mode of that distribution. The resulting method produces musically plausible transcriptions even under high levels of noise and drastically outperforms previous state-of-the-art approaches on five datasets of synthesized sounds and real recordings, approximately halving the test error rate.", "title": "" }, { "docid": "cee1d7d199f6122871391112a8ba1c81", "text": "Plagiarism of digital documents seems to be a serious problem in today's era. Plagiarism refers to the use of someone's data, language and writing without proper acknowledgment of the original source. Plagiarism of another author's original work is one of the biggest problems in publishing, science, and education. Plagiarism can be of different types. This paper presents a different approach for measuring semantic similarity between words and their meanings. Existing systems are based on the traditional approach. For detecting plagiarism, traditional methods focus on text matching according to keywords but fail to detect intelligent plagiarism using semantic web. We have suggested new strategies for detecting the plagiarism in the user document using the semantic web. In this paper we have proposed an architecture and algorithms for better detection of copy cases using semantic search, which can improve the performance of the copy case detection system. It analyzes the user document. After the implementation of this technique, the accuracy of the plagiarism detection system will surely increase.", "title": "" }, { "docid": "a1d061eb47e1404d2160c5e830229dc1", "text": "Recommendation techniques are very important in the fields of E-commerce and other web-based services. One of the main difficulties is dynamically providing high-quality recommendation on sparse data. 
In this paper, a novel dynamic personalized recommendation algorithm is proposed, in which information contained in both ratings and profile contents are utilized by exploring latent relations between ratings, a set of dynamic features are designed to describe user preferences in multiple phases, and finally, a recommendation is made by adaptively weighting the features. Experimental results on public data sets show that the proposed algorithm has satisfying performance.", "title": "" }, { "docid": "0bd7c453279c97333e7ac6c52f7127d8", "text": "Among various biometric modalities, signature verification remains one of the most widely used methods to authenticate the identity of an individual. Signature verification, the most important component of behavioral biometrics, has attracted significant research attention over the last three decades. Despite extensive research, the problem still remains open to research due to the variety of challenges it offers. The high intra-class variations in signatures resulting from different physical or mental states of the signer, the differences that appear with aging and the visual similarity in case of skilled forgeries etc. are only a few of the challenges to name. This paper is intended to provide a review of the recent advancements in offline signature verification with a discussion on different types of forgeries, the features that have been investigated for this problem and the classifiers employed. The pros and cons of notable recent contributions to this problem have also been presented along with a discussion of potential future research directions on this subject.", "title": "" }, { "docid": "7e873e837ccc1696eb78639e03d02cae", "text": "Steering is an integral component of adaptive locomotor behavior. Along with reorientation of gaze and body in the direction of intended travel, body center of mass must be controlled in the mediolateral plane. In this study we examine how these subtasks are sequenced when steering is planned early or initiated under time constraints. Whole body kinematics were monitored as individuals were required to change their direction of travel by varying amounts when visually cued either at the beginning of the walk or one stride before. The analyses focused on the transition stride from one travel direction to another. Timing of changes (with respect to first right foot contact) in trunk roll angle, head and trunk yaw angle, and right foot displacement in the mediolateral plane were analyzed. The magnitude of these measures along with right and left foot placement at the beginning and right foot placement at the end of the transition stride were also analyzed. The results show the CNS uses two mechanisms, foot placement and trunk roll motion (piking action about the hip joint in the frontal plane), to move the center of mass towards the new direction of travel in the transition stride, preferring to use the first option when planning can be done early. Control of body center of mass precedes all other changes and is followed by initiation of head reorientation. Only then is the rest of the body reorientation initiated.", "title": "" }, { "docid": "dd0cc729ce33906c31fa48fbc31b23c1", "text": "Firstborn children's reactions to mother-infant and father-infant interaction after a sibling's birth were examined in an investigation of 224 families. Triadic observations of parent-infant-sibling interaction were conducted at 1 month after the birth. 
Parents reported on children's problem behaviors at 1 and 4 months after the birth and completed the Attachment Q-sort before the birth. Latent profile analysis (LPA) identified 4 latent classes (behavioral profiles) for mother-infant and father-infant interactions: regulated-exploration, disruptive-dysregulated, approach-avoidant, and anxious-clingy. A fifth class, attention-seeking, was found with fathers. The regulated-exploration class was the normative pattern (60%), with few children in the disruptive class (2.7%). Approach-avoidant children had more behavior problems at 4 months than any other class, with the exception of the disruptive children, who were higher on aggression and attention problems. Before the birth, anxious-clingy children had less secure attachments to their fathers than approach avoidant children but more secure attachments to their mothers. Results underscore individual differences in firstborns' behavioral responses to parent-infant interaction and the importance of a person-centered approach for understanding children's jealousy.", "title": "" }, { "docid": "070ffb09caeb20625ca6cea201801b20", "text": "KDD-Cup 2011 challenged the community to identify user tastes in music by leveraging Yahoo! Music user ratings. The competition hosted two tracks, which were based on two datasets sampled from the raw data, including hundreds of millions of ratings. The underlying ratings were given to four types of musical items: tracks, albums, artists, and genres, forming a four level hierarchical taxonomy. The challenge started on March 15, 2011 and ended on June 30, 2011 attracting 2389 participants, 2100 of which were active by the end of the competition. The popularity of the challenge is related to the fact that learning a large scale recommender systems is a generic problem, highly relevant to the industry. In addition, the contest drew interest by introducing a number of scientific and technical challenges including dataset size, hierarchical structure of items, high resolution timestamps of ratings, and a non-conventional ranking-based task. This paper provides the organizers’ account of the contest, including: a detailed analysis of the datasets, discussion of the contest goals and actual conduct, and lessons learned throughout the contest.", "title": "" }, { "docid": "7c82645a48119c4fcfee83ae80caa80e", "text": "For the past few decades, automatic accident detection, especially using video analysis, has become a very important subject. It is important not only for traffic management but also, for Intelligent Transportation Systems (ITS) through its contribution to avoid the escalation of accidents especially on highways. In this paper a novel vision-based road accident detection algorithm on highways and expressways is proposed. This algorithm is based on an adaptive traffic motion flow modeling technique, using Farneback Optical Flow for motions detection and a statistic heuristic method for accident detection. The algorithm was applied on a set of collected videos of traffic and accidents on highways. The results prove the efficiency and practicability of the proposed algorithm using only 240 frames for traffic motion modeling. This method avoids to utilization of a large database while adequate and common accidents videos benchmarks do not exist.", "title": "" }, { "docid": "66f47f612c332ac9e3eee7a4f4024a17", "text": "The welfare of both women and men constitutes the human welfare. 
At the turn of the century amidst the glory of unprecedented growth in national income, India is experiencing the spread of rural distress. It is mainly due to the collapse of agricultural economy. Structural adjustments and competition from large-scale enterprises result in loss of rural livelihoods. Poor delivery of public services and safety nets, deepen the distress. The adverse impact is more on women than on men. This review examines the adverse impact of the events in terms of endowments, livelihood opportunities and nutritional outcomes on women in detail with the help of chosen indicators at two time-periods roughly representing mid nineties and early 2000. The gender equality index computed and the major indicators of welfare show that the gender gap is increasing in many aspects. All the aspects of livelihoods, such as literacy, unemployment and wages now have larger gender gaps than before. Survival indicators such as juvenile sex ratio, infant mortality, child labour have deteriorated for women, compared to men, though there has been a narrowing of gender gaps in life expectancy and literacy. The overall gender gap has widened due to larger gaps in some indicators, which are not compensated by the smaller narrowing in other indicators both in the rural and urban context.", "title": "" }, { "docid": "9e6c54018ca4d2907aa3a069252c6c53", "text": "Chronic pelvic pain is a frustrating symptom for patients with endometriosis and is frequently refractory to hormonal and surgical management. While these therapies target ectopic endometrial lesions, they do not directly address pain due to central sensitization of the nervous system and myofascial dysfunction, which can continue to generate pain from myofascial trigger points even after traditional treatments are optimized. This article provides a background for understanding how endometriosis facilitates remodeling of neural networks, contributing to sensitization and generation of myofascial trigger points. A framework for evaluating such sensitization and myofascial trigger points in a clinical setting is presented. Treatments that specifically address myofascial pain secondary to spontaneously painful myofascial trigger points and their putative mechanisms of action are also reviewed, including physical therapy, dry needling, anesthetic injections, and botulinum toxin injections.", "title": "" }, { "docid": "cb6d3b025e0047a78c9641d5f10ecf07", "text": "Surgical robotics is an evolving field with great advances having been made over the last decade. The origin of robotics was in the science-fiction literature and from there industrial applications, and more recently commercially available, surgical robotic devices have been realized. In this review, we examine the field of robotics from its roots in literature to its development for clinical surgical use. Surgical mills and telerobotic devices are discussed, as are potential future developments.", "title": "" }, { "docid": "0847b2b9270bc39a1273edfdfa022345", "text": "This paper presents the analysis, design and measurement of novel, low-profile, small-footprint folded monopoles employing planar metamaterial phase-shifting lines. These lines are composed of fully-printed spiral elements, that are inductively coupled and hence exhibit an effective high- mu property. An equivalent circuit for the proposed structure is presented, validating the operating principles of the antenna and the metamaterial line. 
The impact of the antenna profile and the ground plane size on the antenna performance is investigated using accurate full-wave simulations. A lambda/9 antenna prototype, designed to operate at 2.36 GHz, is fabricated and tested on both electrically large and small ground planes, achieving on average 80% radiation efficiency, 5% (110 MHz) and 2.5% (55 MHz) -10 dB measured bandwidths, respectively, and fully omnidirectional, vertically polarized, monopole-type radiation patterns.", "title": "" }, { "docid": "ef657884e6a7af08ca237cc97a2dfb19", "text": "Bruxism is defined as the repetitive jaw muscle activity characterized by the clenching or grinding of teeth. It can be categorized into awake and sleep bruxism (SB). Frequent SB occurs in about 13% of adults. The exact etiology of SB is still unknown and probably multifactorial in nature. Current literature suggests that SB is regulated centrally (pathophysiological and psychosocial factors) and not peripherally (morphological factors). Cited consequences of SB include temporomandibular disorders, headaches, tooth wear/fracture, implant, and other restoration failure. Chairside recognition of SB involves the use of subjective reports, clinical examinations, and trial oral splints. Definitive diagnosis of SB can only be achieved using electrophysiological tools. Pharmacological, psychological, and dental strategies had been employed to manage SB. There is at present, no effective treatment that \"cures\" or \"stops\" SB permanently. Management is usually directed toward tooth/restoration protection, reduction of bruxism activity, and pain relief.", "title": "" } ]
scidocsrr
2d246f6a8a18f07ec6bb2b3fccbfc95e
A billion keys, but few locks: the crisis of web single sign-on
[ { "docid": "40253c089606f7e9d259818500704c51", "text": "Banks worldwide are starting to authenticate online card transactions using the ‘3-D Secure’ protocol, which is branded as Verified by Visa and MasterCard SecureCode. This has been partly driven by the sharp increase in online fraud that followed the deployment of EMV smart cards for cardholder-present payments in Europe and elsewhere. 3-D Secure has so far escaped academic scrutiny; yet it might be a textbook example of how not to design an authentication protocol. It ignores good design principles and has significant vulnerabilities, some of which are already being exploited. Also, it provides a fascinating lesson in security economics. While other single sign-on schemes such as OpenID, InfoCard and Liberty came up with decent technology they got the economics wrong, and their schemes have not been adopted. 3-D Secure has lousy technology, but got the economics right (at least for banks and merchants); it now boasts hundreds of millions of accounts. We suggest a path towards more robust authentication that is technologically sound and where the economics would work for banks, merchants and customers – given a gentle regulatory nudge.", "title": "" }, { "docid": "9bbf2a9f5afeaaa0f6ca12e86aef8e88", "text": "Phishing is a model problem for illustrating usability concerns of privacy and security because both system designers and attackers battle using user interfaces to guide (or misguide) users.We propose a new scheme, Dynamic Security Skins, that allows a remote web server to prove its identity in a way that is easy for a human user to verify and hard for an attacker to spoof. We describe the design of an extension to the Mozilla Firefox browser that implements this scheme.We present two novel interaction techniques to prevent spoofing. First, our browser extension provides a trusted window in the browser dedicated to username and password entry. We use a photographic image to create a trusted path between the user and this window to prevent spoofing of the window and of the text entry fields.Second, our scheme allows the remote server to generate a unique abstract image for each user and each transaction. This image creates a \"skin\" that automatically customizes the browser window or the user interface elements in the content of a remote web page. Our extension allows the user's browser to independently compute the image that it expects to receive from the server. To authenticate content from the server, the user can visually verify that the images match.We contrast our work with existing anti-phishing proposals. In contrast to other proposals, our scheme places a very low burden on the user in terms of effort, memory and time. To authenticate himself, the user has to recognize only one image and remember one low entropy password, no matter how many servers he wishes to interact with. To authenticate content from an authenticated server, the user only needs to perform one visual matching operation to compare two images. Furthermore, it places a high burden of effort on an attacker to spoof customized security indicators.", "title": "" } ]
[ { "docid": "fea4f8d358afdee5aa9a57cdf19d63a0", "text": "Developers spend significant time reading and navigating code fragments spread across multiple locations. The file-based nature of contemporary IDEs makes it prohibitively difficult to create and maintain a simultaneous view of such fragments. We propose a novel user interface metaphor for code understanding based on collections of lightweight, editable fragments called bubbles, which form concurrently visible working sets. We present the results of a qualitative usability evaluation, and the results of a quantitative study which indicates Code Bubbles significantly improved code understanding time, while reducing navigation interactions over a widely-used IDE, for two controlled tasks.", "title": "" }, { "docid": "aa9a447a4cebaea7995df6954a77cdb5", "text": "Accurately representing the meaning of a piece of text, otherwise known as sentence modelling, is an important component in many natural language inference tasks. We survey the spectrum of these methods, which lie along two dimensions: input representation granularity and composition model complexity. Using this framework, we reveal in our quantitative and qualitative experiments the limitations of the current state-of-the-art model in the context of sentence similarity tasks.", "title": "" }, { "docid": "80fa326191a18172639b705f80809b8c", "text": "Breast cancer represents the most frequently diagnosed cancer in women. Mammography is the most commonly method for early detection of masses related to breast cancer. Correlation of information from multiple-view mammograms improves the performance of diagnosis by radiologists or by computer assisted systems. Detecting the location of masses accurately is highly important for radiologist to classify masses, for the surgeon to help in ease the procedure of an accurate surgery and in radio therapy process for less and efficient dose. In this paper, CAST is developed to accurately locate abnormal masses by quarter and clock segments. CAST will replace the manual localization method. The nipple is used as a reference point in localization process. A new simple nipple detection method is also proposed. The proposed CAST is examined on a new local database which was tested and classified by experts. The methodology achieved a sensitivity of 96% and specificity 73.33%.", "title": "" }, { "docid": "b93c7bf4cfb73a920aa9cd95a11fb182", "text": "In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities. To this end, we propose an extension of the MSCOCO dataset, FOIL-COCO, which associates images with both correct and ‘foil’ captions, that is, descriptions of the image that are highly similar to the original ones, but contain one single mistake (‘foil word’). We show that current LaVi models fall into the traps of this data and perform badly on three tasks: a) caption classification (correct vs. foil); b) foil word detection; c) foil word correction. Humans, in contrast, have near-perfect performance on those tasks. We demonstrate that merely utilising language cues is not enough to model FOIL-COCO and that it challenges the state-of-the-art by requiring a fine-grained understanding of the relation between text and image.", "title": "" }, { "docid": "64b0db1e23b225fab910bef5de9fd921", "text": "Question answering (QA) has become a popular way for humans to access billion-scale knowledge bases. 
Unlike web search, QA over a knowledge base gives out accurate and concise results, provided that natural language questions can be understood and mapped precisely to structured queries over the knowledge base. The challenge, however, is that a human can ask one question in many different ways. Previous approaches have natural limits due to their representations: rule based approaches only understand a small set of “canned” questions, while keyword based or synonym based approaches cannot fully understand the questions. In this paper, we design a new kind of question representation: templates, over a billion scale knowledge base and a million scale QA corpora. For example, for questions about a city’s population, we learn templates such as What’s the population of $city?, How many people are there in $city?. We learned 27 million templates for 2782 intents. Based on these templates, our QA system KBQA effectively supports binary factoid questions, as well as complex questions which are composed of a series of binary factoid questions. Furthermore, we expand predicates in RDF knowledge base, which boosts the coverage of knowledge base by 57 times. Our QA system beats all other state-of-art works on both effectiveness and efficiency over QALD benchmarks.", "title": "" }, { "docid": "4c4376a25aa61e891294708b753dcfec", "text": "Ransomware, a class of self-propagating malware that uses encryption to hold the victims’ data ransom, has emerged in recent years as one of the most dangerous cyber threats, with widespread damage; e.g., zero-day ransomware WannaCry has caused world-wide catastrophe, from knocking U.K. National Health Service hospitals offline to shutting down a Honda Motor Company in Japan [1]. Our close collaboration with security operations of large enterprises reveals that defense against ransomware relies on tedious analysis from high-volume systems logs of the first few infections. Sandbox analysis of freshly captured malware is also commonplace in operation. We introduce a method to identify and rank the most discriminating ransomware features from a set of ambient (non-attack) system logs and at least one log stream containing both ambient and ransomware behavior. These ranked features reveal a set of malware actions that are produced automatically from system logs, and can help automate tedious manual analysis. We test our approach using WannaCry and two polymorphic samples by producing logs with Cuckoo Sandbox during both ambient, and ambient plus ransomware executions. Our goal is to extract the features of the malware from the logs with only knowledge that malware was present. We compare outputs with a detailed analysis of WannaCry allowing validation of the algorithm’s feature extraction and provide analysis of the method’s robustness to variations of input data—changing quality/quantity of ambient data and testing polymorphic ransomware. Most notably, our patterns are accurate and unwavering when generated from polymorphic WannaCry copies, on which 63 (of 63 tested) antivirus (AV) products fail.", "title": "" }, { "docid": "e3e024fa2ee468fb2a64bfc8ddf69467", "text": "We used two methods to estimate short-wave (S) cone spectral sensitivity. Firstly, we measured S-cone thresholds centrally and peripherally in five trichromats, and in three blue-cone monochromats, who lack functioning middle-wave (M) and long-wave (L) cones. Secondly, we analyzed standard color-matching data. 
Both methods yielded equivalent results, on the basis of which we propose new S-cone spectral sensitivity functions. At short and middle-wavelengths, our measurements are consistent with the color matching data of Stiles and Burch (1955, Optica Acta, 2, 168-181; 1959, Optica Acta, 6, 1-26), and other psychophysically measured functions, such as pi 3 (Stiles, 1953, Coloquio sobre problemas opticos de la vision, 1, 65-103). At longer wavelengths, S-cone sensitivity has previously been over-estimated.", "title": "" }, { "docid": "8e25d7b5b468c008b18634937e8f3204", "text": "For personal protection against mosquito bites, user-friendly natural repellents, particularly from plant origin, are considered as a potential alternative to applications currently based on synthetics such as DEET, the standard chemical repellent. This study was carried out in Thailand to evaluate the repellency of Ligusticum sinense hexane extract (LHE) against laboratory Anopheles minimus and Aedes aegypti, the primary vectors of malaria and dengue fever, respectively. Repellent testing of 25% LHE against the two target mosquitoes; An. minimus and Ae. aegypti, was performed and compared to the standard repellent, DEET, with the assistance of six human volunteers of either sex under laboratory conditions. The physical and biological stability of LHE also was determined after keeping it in conditions that varied in temperature and storage time. Finally, LHE was analysed chemically using the qualitative GC/MS technique in order to demonstrate a profile of chemical constituents. Ethanol preparations of LHE, with and without 5% vanillin, demonstrated a remarkably effective performance when compared to DEET in repelling both An. minimus and Ae. aegypti. While 25% LHE alone provided median complete-protection times against An. minimus and Ae. aegypti of 11.5 (9.0–14.0) hours and 6.5 (5.5–9.5) hours, respectively, the addition of 5% vanillin increased those times to 12.5 (9.0–16.0) hours and 11.0 (7.0–13.5) hours, respectively. Correspondingly, vanillin added to 25% DEET also extended the protection times from 11.5 (10.5–15.0) hours to 14.25 (11.0–18.0) hours and 8.0 (5.0–9.5) hours to 8.75 (7.5–11.0) hours against An. minimus and Ae. aegypti, respectively. No local skin reaction such as rash, swelling or irritation was observed during the study period. Although LHE samples kept at ambient temperature (21–35°C), and 45°C for 1, 2 and 3 months, demonstrated similar physical characteristics, such as similar viscosity and a pleasant odour, to those that were fresh and stored at 4°C, their colour changed from light- to dark-brown. Interestingly, repellency against Ae. aegypti of stored LHE was presented for a period of at least 3 months, with insignificantly varied efficacy. Chemical analysis revealed that the main components of LHE were 3-N-butylphthalide (31.46%), 2, 5-dimethylpyridine (21.94%) and linoleic acid (16.41%), constituting 69.81% of all the extract composition. 
LHE with proven repellent efficacy, no side effects on the skin, and a rather stable state when kept in varied conditions is considered to be a potential candidate for developing a new natural alternative to DEET, or an additional weapon for integrated vector control when used together with other chemicals/measures.", "title": "" }, { "docid": "78007b3276e795d76b692b40c4808c51", "text": "The construct of trait emotional intelligence (trait EI or trait emotional self-efficacy) provides a comprehensive operationalization of emotion-related self-perceptions and dispositions. In the first part of the present study (N=274, 92 males), we performed two joint factor analyses to determine the location of trait EI in Eysenckian and Big Five factor space. The results showed that trait EI is a compound personality construct located at the lower levels of the two taxonomies. In the second part of the study, we performed six two-step hierarchical regressions to investigate the incremental validity of trait EI in predicting, over and above the Giant Three and Big Five personality dimensions, six distinct criteria (life satisfaction, rumination, two adaptive and two maladaptive coping styles). Trait EI incrementally predicted four criteria over the Giant Three and five criteria over the Big Five. The discussion addresses common questions about the operationalization of emotional intelligence as a personality trait.", "title": "" }, { "docid": "dc9dc86d2ff5775636fa2bc00369a110", "text": "Using a cognitive linguistics perspective, this book provides the most comprehensive theoretical analysis of the semantics of English prepositions available. All English prepositions originally coded spatial relations between two physical entities; while retaining their original meaning, prepositions have also developed a rich set of non-spatial meanings. In this innovative study, Tyler and Evans argue that all these meanings are systematically grounded in the nature of human spatio-physical experience. The original ‘spatial scenes’ provide the foundation for the extension of meaning from the spatial to the more abstract. This analysis introduces a new methodology that distinguishes between a conventional meaning and an interpretation produced for understanding the preposition in context, as well as establishing which of several competing senses should be taken as the primary sense. Together, the methodology and framework are sufficiently articulated to generate testable predictions and allow the analysis to be applied to additional prepositions.", "title": "" }, { "docid": "7c36d7f2a9604470e0e97bd2425bbf0c", "text": "Gamification, the use of game mechanics in non-gaming applications, has been applied to various systems to encourage desired user behaviors. In this paper, we examine patterns of user activity in an enterprise social network service after the removal of a points-based incentive system. Our results reveal that the removal of the incentive scheme did reduce overall participation via contribution within the SNS. We also describe the strategies by point leaders and observe that users geographically distant from headquarters tended to comment on profiles outside of their home country. 
Finally, we describe the implications of the removal of extrinsic rewards, such as points and badges, on social software systems, particularly those deployed within an enterprise.", "title": "" }, { "docid": "fe759d1674a09bb5b48f7645fe2f2ced", "text": "Conceptualization (AC)", "title": "" }, { "docid": "1014a09fbded05ab4eb2438aa3631d2d", "text": "In the last decade, self-myofascial release has become an increasingly common modality to supplement traditional methods of massage, so a masseuse is not necessary. However, there are limited clinical data demonstrating the efficacy or mechanism of this treatment on athletic performance. The purpose of this study was to determine whether the use of myofascial rollers before athletic tests can enhance performance. Twenty-six (13 men and 13 women) healthy college-aged individuals (21.56 ± 2.04 years, 23.97 ± 3.98 body mass index, 20.57 ± 12.21 percent body fat) were recruited. The study design was a randomized crossover design in which subject performed a series of planking exercises or foam rolling exercises and then performed a series of athletic performance tests (vertical jump height and power, isometric force, and agility). Fatigue, soreness, and exertion were also measured. A 2 × 2 (trial × gender) analysis of variance with repeated measures and appropriate post hoc was used to analyze the data. There were no significant differences between foam rolling and planking for all 4 of the athletic tests. However, there was a significant difference between genders on all the athletic tests (p ≤ 0.001). As expected, there were significant increases from pre to post exercise during both trials for fatigue, soreness, and exertion (p ≤ 0.01). Postexercise fatigue after foam rolling was significantly less than after the subjects performed planking (p ≤ 0.05). The reduced feeling of fatigue may allow participants to extend acute workout time and volume, which can lead to chronic performance enhancements. However, foam rolling had no effect on performance.", "title": "" }, { "docid": "d7ea5e0bdf811f427b7c283d4aae7371", "text": "This work investigates the development of students’ computational thinking (CT) skills in the context of educational robotics (ER) learning activity. The study employs an appropriate CT model for operationalising and exploring students’ CT skills development in two different age groups (15 and 18 years old) and across gender. 164 students of different education levels (Junior high: 89; High vocational: 75) engaged in ER learning activities (2 hours per week, 11 weeks totally) and their CT skills were evaluated at different phases during the activity, using different modality (written and oral) assessment tools. The results suggest that: (a) students reach eventually the same level of CT skills development independent of their age and gender, (b) CT skills inmost cases need time to fully develop (students’ scores improve significantly towards the end of the activity), (c) age and gender relevant differences appear when analysing students’ score in the various specific dimensions of the CT skills model, (d) the modality of the skill assessment instrumentmay have an impact on students’ performance, (e) girls appear inmany situations to need more training time to reach the same skill level compared to boys. © 2015 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "aca770fa21637483c3ef0d028f8d3b64", "text": "In the analysis of bibliometric networks, researchers often use mapping and clustering techniques in a combined fashion. Typically, however, mapping and clustering techniques that are used together rely on very different ideas and assumptions. We propose a unified approach to mapping and clustering of bibliometric networks. We show that the VOS mapping technique and a weighted and parameterized variant of modularity-based clustering can both be derived from the same underlying principle. We illustrate our proposed approach by producing a combined mapping and clustering of the most frequently cited publications that appeared in the field of information science in the period 1999–2008.", "title": "" }, { "docid": "f645e9b0e66f06959313678acabaa186", "text": "Automatic summarization techniques facilitate multimedia indexing and access by reducing the content of a given item to its essential parts. However, novel approaches for summarization should be developed since existing methods cannot offer a general and unobtrusive solution. Considering that the consumption of multimedia data is more and more social, we propose to use a physiological index of social interaction, namely, physiological linkage, to determine general highlights of videos. The proposed method detects highlights which are relevant to the majority of viewers without requiring them any conscious effort. Experimental testing has demonstrated the validity of the proposed system which obtained a classification accuracy of up to 78.2%. CHENES, Christophe, et al. Highlights Detection in Movie Scenes Through Inter-Users Physiological Linkage. In: N. Ramzan and R. van Zwol and J.-S Lee and K. Clüver and X.-S Hua. Social media retrieval. Springer, 2013. DOI : 10.1007/978-1-4471-4555-4_10", "title": "" }, { "docid": "fb128fdbd2975edee014ad86113595dd", "text": "Recurrent neural networks have become ubiquitous in computing representations of sequential data, especially textual data in natural language processing. In particular, Bidirectional LSTMs are at the heart of several neural models achieving state-of-the-art performance in a wide variety of tasks in NLP. However, BiLSTMs are known to suffer from sequential bias – the contextual representation of a token is heavily influenced by tokens close to it in a sentence. We propose a general and effective improvement to the BiLSTM model which encodes each suffix and prefix of a sequence of tokens in both forward and reverse directions. We call our model Suffix Bidirectional LSTM or SuBiLSTM. This introduces an alternate bias that favors long range dependencies. We apply SuBiLSTMs to several tasks that require sentence modeling. We demonstrate that using SuBiLSTM instead of a BiLSTM in existing models leads to improvements in performance in learning general sentence representations, text classification, textual entailment and paraphrase detection. Using SuBiLSTM we achieve new state-of-the-art results for fine-grained sentiment classification and question classification.", "title": "" }, { "docid": "a6c1df858f05972157f6b53314582d39", "text": "Dissecting cellulitis (DC) also referred to as to as perifolliculitis capitis abscedens et suffodiens (Hoffman) manifests with perifollicular pustules, nodules, abscesses and sinuses that evolve into scarring alopecia. In the U.S., it predominantly occurs in African American men between 20-40 years of age. DC also occurs in other races and women more rarely. 
DC has been reported worldwide. Older therapies reported effective include: low dose oral zinc, isotretinoin, minocycline, sulfa drugs, tetracycline, prednisone, intralesional triamcinolone, incision and drainage, dapsone, antiandrogens (in women), topical clindamycin, topical isotretinoin, X-ray epilation and ablation, ablative C02 lasers, hair removal lasers (800nm and 694nm), and surgical excision. Newer treatments reported include tumor necrosis factor blockers (TNFB), quinolones, macrolide antibiotics, rifampin, alitretinoin, metronidazole, and high dose zinc sulphate (135-220 mg TID). Isotretinoin seems to provide the best chance at remission, but the number of reports is small, dosing schedules variable, and the long term follow up beyond a year is negligible; treatment failures have been reported. TNFB can succeed when isotretinoin fails, either as monotherapy, or as a bridge to aggressive surgical treatment, but long term data is lacking. Non-medical therapies noted in the last decade include: the 1064 nm laser, ALA-PDT, and modern external beam radiation therapy. Studies that span more than 1 year are lacking. Newer pathologic hair findings include: pigmented casts, black dots, and \"3D\" yellow dots. Newer associations include: keratitis-ichthyosis-deafness syndrome, Crohn disease and pyoderma gangrenosum. Older associations include arthritis and keratitis. DC is likely a reaction pattern, as is shown by its varied therapeutic successes and failures. The etiology of DC remains enigmatic and DC is distinct from hidradenitis suppurativa, which is shown by their varied responses to therapies and their histologic differences. Like HS, DC likely involves both follicular dysfunction and an aberrant cutaneous immune response to commensal bacteria, such as coagulase negative staphylococci. The incidence of DC is likely under-reported. The literature suggests that now most cases of DC can be treated effectively. However, the lack of clinical studies regarding DC prevents full understanding of the disease and limits the ability to define a consensus treatment algorithm.", "title": "" }, { "docid": "5ffb3e630e5f020365e471e94d678cbb", "text": "This paper presents one perspective on recent developments related to software engineering in the industrial automation sector that spans from manufacturing factory automation to process control systems and energy automation systems. The survey's methodology is based on the classic SWEBOK reference document that comprehensively defines the taxonomy of software engineering domain. This is mixed with classic automation artefacts, such as the set of the most influential international standards and dominating industrial practices. The survey focuses mainly on research publications which are believed to be representative of advanced industrial practices as well.", "title": "" }, { "docid": "892766e9b3ecda39e6dfa3b776acf248", "text": "Although computing technology has made inroads into home environments, it has yet to instigate a major shift in the design of homes or home activities. The convergence of television and the Internet is lagging behind expectations, and the combination of desktop computers, entertainment consoles, televisions, and cell phones has yet to form a cohesive whole. One possible reason for this lag in progress is that these technologies don't address a coherent need -they merely augment current entertainment and communication practices. 
We base our research on the premise that the next revolution of technology in the home will arise from devices that help older adults maintain their independence. A coherent suite of technologies will eventually let older adults be proactive about their own healthcare, will aid them in daily activities and help them learn new skills, will create new avenues for social communication, and will help ensure their safety and well being. The Aware Home Research initiative hopes to help older adults \"age in place\" by creating devices that can assist with daily tasks, offer memory support, and monitor daily activities. However, understanding user needs and attitudes is essential to deploying this technology.", "title": "" } ]
scidocsrr
ee707e42a74a7b52a8edc257e3d8a0f5
Lexical pragmatics, ad hoc concepts and metaphor: A Relevance Theory perspective
[ { "docid": "9aaeb35eb5c387e1eb2c2cffd677dbd8", "text": "In this paper I discuss some general problems one is confronted with when trying to analyze the utterance of words within concrete conceptual and contextual settings and to go beyond the aspects of meaning typically investigated by a contrastive analysis of lexemes within the Katz-Fodor tradition of semantics. After emphasizing some important consequences of the traditional view, several phenomena are collected that seem to conflict with the theoretical settings made by it. Some extensions of the standard theory are outlined that take a broader view of language interpretation and claim to include pragmatic aspects of (utterance) meaning. The models critically considered include Bartsch's indexical theory of polysemy, Bierwisch's two-level semantics and Pustejovsky's generative lexicon. Finally, I argue in favor of a particular account of the division of labor between lexical semantics and pragmatics. This account combines the idea of (radical) semantic underspecification in the lexicon with a theory of pragmatic strengthening (based on conversational implicatures).", "title": "" } ]
[ { "docid": "253a4482b462b134f915d89cbc57577a", "text": "Ontology is one of the essential topics in the scope of an important area of current computer science and Semantic Web. Ontologies present well defined, straightforward and standardized form of the repositories (vast and reliable knowledge) where it can be interoperable and machine understandable. There are many possible utilization of ontologies from automatic annotation of web resources to domain representation and reasoning task. Ontology is an effective conceptualism used for the semantic web. However there is none of the research try to construct an ontology from Islamic knowledge which consist of Holy Quran, Hadiths and etc. Therefore as a first stage, in this paper we try to propose a simple methodology in order to extract a concept based on Al-Quran. Finally, we discuss about the experiment that have been conducted.", "title": "" }, { "docid": "108f65d148514e621b44d32467df41df", "text": "DNS tunnels allow circumventing access and security policies in firewalled networks. Such a security breach can be misused for activities like free web browsing, but also for command & control traffic or cyber espionage, thus motivating the search for effective automated DNS tunnel detection techniques. In this paper we develop such a technique, based on the monitoring and analysis of network flows. Our methodology combines flow information with statistical methods for anomaly detection. The contribution of our paper is twofold. Firstly, based on flow-derived variables that we identified as indicative of DNS tunnelling activities, we identify and evaluate a set of non-parametrical statistical tests that are particularly useful in this context. Secondly, the efficacy of the resulting tests is demonstrated by extensive validation experiments in an operational environment, covering many different usage scenarios.", "title": "" }, { "docid": "000bdac12cd4254500e22b92b1906174", "text": "In this paper we address the topic of generating automatically accurate, meaning preserving and syntactically correct paraphrases of natural language sentences. The design of methods and tools for paraphrasing natural language text is a core task of natural language processing and is quite useful in many applications and procedures. We present a methodology and a tool developed that performs deep analysis of natural language sentences and generate paraphrases of them. The tool performs deep analysis of the natural language sentence and utilizes sets of paraphrasing techniques that can be used to transform structural parts of the dependency tree of a sentence to an equivalent form and also change sentence words with their synonyms and antonyms. In the evaluation study the performance of the method is examined and the accuracy of the techniques is assessed in terms of syntactic correctness and meaning preserving. The results collected are very promising and show the method to be accurate and able to generate quality paraphrases.", "title": "" }, { "docid": "2a7de9a210dd074caebeef62d0a56700", "text": "We describe a new algorithm to enumerate the k shortest simple (loopless) paths in a directed graph and report on its implementation. Our algorithm is based on a replacement paths algorithm proposed by Hershberger and Suri [2001], and can yield a factor Θ(n) improvement for this problem. But there is a caveat: The fast replacement paths subroutine is known to fail for some directed graphs. 
However, the failure is easily detected, and so our k shortest paths algorithm optimistically uses the fast subroutine, then switches to a slower but correct algorithm if a failure is detected. Thus, the algorithm achieves its Θ(n) speed advantage only when the optimism is justified. Our empirical results show that the replacement paths failure is a rare phenomenon, and the new algorithm outperforms the current best algorithms; the improvement can be substantial in large graphs. For instance, on GIS map data with about 5,000 nodes and 12,000 edges, our algorithm is 4--8 times faster. In synthetic graphs modeling wireless ad hoc networks, our algorithm is about 20 times faster.", "title": "" }, { "docid": "bc1d4ce838971d6a04d5bf61f6c3f2d8", "text": "This paper presents a novel network slicing management and orchestration architectural framework. A brief description of business scenarios and potential customers of network slicing is provided, illustrating the need for ordering network services with very different requirements. Based on specific customer goals (of ordering and building an end-to-end network slice instance) and other requirements gathered from industry and standardization associations, a solution is proposed enabling the automation of end-to-end network slice management and orchestration in multiple resource domains. This architecture distinguishes between two main design time and runtime components: Network Slice Design and Multi-Domain Orchestrator, belonging to different competence service areas with different players in these domains, and proposes the required interfaces and data structures between these components.", "title": "" }, { "docid": "47aeee7c9d1208302cfc7d779a090df9", "text": "Much of learning and reasoning occurs in pedagogical situations--situations in which a person who knows a concept chooses examples for the purpose of helping a learner acquire the concept. We introduce a model of teaching and learning in pedagogical settings that predicts which examples teachers should choose and what learners should infer given a teacher's examples. We present three experiments testing the model predictions for rule-based, prototype, and causally structured concepts. The model shows good quantitative and qualitative fits to the data across all three experiments, predicting novel qualitative phenomena in each case. We conclude by discussing implications for understanding concept learning and implications for theoretical claims about the role of pedagogy in human learning.", "title": "" }, { "docid": "96d6173f58e36039577c8e94329861b2", "text": "Reverse Turing tests, or CAPTCHAs, have become an ubiquitous defense used to protect open Web resources from being exploited at scale. An effective CAPTCHA resists existing mechanistic software solving, yet can be solved with high probability by a human being. In response, a robust solving ecosystem has emerged, reselling both automated solving technology and realtime human labor to bypass these protections. Thus, CAPTCHAs can increasingly be understood and evaluated in purely economic terms; the market price of a solution vs the monetizable value of the asset being protected. 
We examine the market-side of this question in depth, analyzing the behavior and dynamics of CAPTCHA-solving service providers, their price performance, and the underlying labor markets driving this economy.", "title": "" }, { "docid": "ecb146ae27419d9ca1911dc4f13214c1", "text": "In this paper, a simple mix integer programming for distribution center location is proposed. Based on this simple model, we introduce two important factors, transport mode and carbon emission, and extend it a model to describe the location problem for green supply chain. Sequently, IBM Watson implosion technologh (WIT) tool was introduced to describe them and solve them. By changing the price of crude oil, we illustrate the its impact on distribution center locations and transportation mode option for green supply chain. From the cases studies, we have known that, as the crude oil price increasing, the profits of the whole supply chain will decrease, carbon emission will also decrease to some degree, while the number of opened distribution center will increase.", "title": "" }, { "docid": "5a248466c2e82b8453baa483a05bc25b", "text": "Early severe stress and maltreatment produces a cascade of neurobiological events that have the potential to cause enduring changes in brain development. These changes occur on multiple levels, from neurohumoral (especially the hypothalamic-pituitary-adrenal [HPA] axis) to structural and functional. The major structural consequences of early stress include reduced size of the mid-portions of the corpus callosum and attenuated development of the left neocortex, hippocampus, and amygdala. Major functional consequences include increased electrical irritability in limbic structures and reduced functional activity of the cerebellar vermis. There are also gender differences in vulnerability and functional consequences. The neurobiological sequelae of early stress and maltreatment may play a significant role in the emergence of psychiatric disorders during development.", "title": "" }, { "docid": "2392f4dcf3486956d2c08c621492b715", "text": "As urban population grows, cities face many challenges related to transportation, resource consumption, and the environment. Ride sharing has been proposed as an effective approach to reduce traffic congestion, gasoline consumption, and pollution. Despite great promise, researchers and policy makers lack adequate tools to assess tradeoffs and benefits of various ride-sharing strategies. Existing approaches either make unrealistic modeling assumptions or do not scale to the sizes of existing data sets. In this paper, we propose a real-time, data-driven simulation framework that supports the efficient analysis of taxi ride sharing. By modeling taxis and trips as distinct entities, our framework is able to simulate a rich set of realistic scenarios. At the same time, by providing a comprehensive set of parameters, we are able to study the taxi ride-sharing problem from different angles, considering different stakeholders' interests and constraints. To address the computational complexity of the model, we describe a new optimization algorithm that is linear in the number of trips and makes use of an efficient indexing scheme, which combined with parallelization, makes our approach scalable. We evaluate our framework and algorithm using real data - 360 million trips taken by 13,000 taxis in New York City during 2011 and 2012. 
The results demonstrate that our framework is effective and can provide insights into strategies for implementing city-wide ride-sharing solutions. We describe the findings of the study as well as a performance analysis of the model.", "title": "" }, { "docid": "4a12c2cc8458566123de02177efd73d0", "text": "This paper presents a hybrid approach to face detection and feature extraction. The remarkable advancement in technology has enhanced the use of more accurate and precise methods to detect faces. This paper presents a combination of three well known algorithms ViolaJones face detection framework, Neural Networks and Canny edge detection method to detect face in static images. The proposed work emphasizes on the face detection and identification using Viola-Jones algorithm which is a real time face detection system. Neural Networks will be used as a classifier between faces and non-faces. Canny edge detection method is an efficient method for detecting boundaries on a face in this proposed work. The Canny edge detector is primarily useful to locate sharp intensity changes and to find object boundaries in an image.", "title": "" }, { "docid": "9ac90eeb0dec90578e060828b210a120", "text": "Computer networks are limited in performance by the electronic equipment. Terminals have received little attention, but need to be redesigned in order to be able to manage 10 Gigabit Ethernet. The Internet checksum computation, which is used in TCP and UDP requires specialized processing resources. The TUCFP hardware accelerator calculates the Internet checksum. It processes 32 bits in parallel and is designed for easy integration in the general purpose protocol processor. It handles UDP as well as TCP packets in both IPv4 and IPv6 environments. A synthesized implementation for 0.18 micron technology proves a throughput of over 12 Gigabits/s.", "title": "" }, { "docid": "394c8f7a708d69ca26ab0617ab1530ab", "text": "Developing wireless sensor networks can enable information gathering, information processing and reliable monitoring of a variety of environments for both civil and military applications. It is however necessary to agree upon a basic architecture for building sensor network applications. This paper presents a general classification of sensor network applications based on their network configurations and discusses some of their architectural requirements. We propose a generic architecture for a specific subclass of sensor applications which we define as self-configurable systems where a large number of sensors coordinate amongst themselves to achieve a large sensing task. Throughout this paper we assume a certain subset of the sensors to be immobile. This paper lists the general architectural and infra-structural components necessary for building this class of sensor applications. Given the various architectural components, we present an algorithm that self-organizes the sensors into a network in a transparent manner. Some of the basic goals of our algorithm include minimizing power utilization, localizing operations and tolerating node and link failures.", "title": "" }, { "docid": "cb7e4299f0994d2fe37ea2f1dc382610", "text": "This paper presents a quick and accurate power control method for a zone-control induction heating (ZCIH) system. The ZCIH system consists of multiple working coils connected to multiple H-bridge inverters. The system controls the amplitude and phase angle of each coil current to make the temperature distribution on the workpiece uniform. 
This paper proposes a new control method for the coil currents based on a circuit model using real and imaginary (Re-Im) current/voltage components. The method detects and controls the Re-Im components of the coil current instead of the current amplitude and phase angle. As a result, the proposed method enables decoupling control for the system, making the control for each working coil independent from the others. Experiments on a 6-zone ZCIH laboratory setup are conducted to verify the validity of the proposed method. It is clarified that the proposed method has a stable operation both in transient and steady states. The proposed system and control method enable system complexity reduction and control stability improvements.", "title": "" }, { "docid": "d730fb49b7b6f971593e7e116e0c48bf", "text": "Modern image and video compression techniques today offer the possibility to store or transmit the vast amount of data necessary to represent digital images and video in an efficient and robust way. New audio visual applications in the field of communication, multimedia and broadcasting became possible based on digital video coding technology. As manifold as applications for image coding are today, as manifold are the different approaches and algorithms and were the first hardware implementations and even systems in the commercial field, such as private teleconferencing systems [chen, hal]. However, with the advances in VLSI-technology it became possible to open more application fields to a larger number of users and therefore the necessity for video coding standards arose. Commercially, international standardization of video communication systems and protocols aims to serve two important purposes: interoperability and economy of scale. Interworking between video communication equipment from different vendors is a desirable feature for users and equipment manufactures alike. It increases the attractiveness for buying and using video", "title": "" }, { "docid": "71bc346237c5f97ac245dd7b7bbb497f", "text": "Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.", "title": "" }, { "docid": "6b3abd92478a641d992ed4f4f08f52d5", "text": "In this article, we consider the robust estimation of a location parameter using Mestimators. We propose here to couple this estimation with the robust scale estimate proposed in [Dahyot and Wilson, 2006]. The resulting procedure is then completely unsupervised. It is applied to camera motion estimation and moving object detection in videos. 
Experimental results on different video materials show the adaptability and the accuracy of this new robust approach.", "title": "" }, { "docid": "28c03f6fb14ed3b7d023d0983cb1e12b", "text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "title": "" }, { "docid": "9787baec91ed7ba70c7c8b7fd64a7e92", "text": "Today's popular web search engines expand the search process beyond crawled web pages to specialized corpora (\"verticals\") like images, videos, news, local, sports, finance, shopping etc., each with its own specialized search engine. Search federation deals with problems of the selection of search engines to query and merging of their results into a single result set. Despite a few recent advances, the problem is still very challenging. First, due to the heterogeneous nature of different verticals, how the system merges the vertical results with the web documents to serve the user's information need is still an open problem. Moreover, the scale of the search engine and the increasing number of vertical properties requires a solution which is efficient and scaleable. In this paper, we propose a unified framework for the search federation problem. We model the search federation as a contextual bandit problem. The system uses reward as a proxy for user satisfaction. Given a query, our system predicts the expected reward for each vertical, then organizes the search result page (SERP) in a way which maximizes the total reward. Instead of relying on human judges, our system leverages implicit user feedback to learn the model. The method is efficient to implement and can be applied to verticals of different nature. We have successfully deployed the system to three different markets, and it handles multiple verticals in each market. The system is now serving hundreds of millions of queries live each day, and has improved user metrics considerably.", "title": "" }, { "docid": "842ee1e812d408df7e6f7dfd95e32a36", "text": "Abstract Phase segregation, the process by which the components of a binary mixture spontaneously separate, is a key process in the evolution and design of many chemical, mechanical, and biological systems. In this work, we present a data-driven approach for the learning, modeling, and prediction of phase segregation. A direct mapping between an initially dispersed, immiscible binary fluid and the equilibrium concentration field is learned by conditional generative convolutional neural networks. 
Concentration field predictions by the deep learning model conserve phase fraction, correctly predict phase transition, and reproduce area, perimeter, and total free energy distributions up to 98% accuracy.", "title": "" } ]
scidocsrr
c3e29b5e72dee89f546eefd5a1255ad5
Sleep Quality and Academic Performance in University Students: A Wake-Up Call for College Psychologists
[ { "docid": "06e74a431b45aec75fb21066065e1353", "text": "Despite the prevalence of sleep complaints among psychiatric patients, few questionnaires have been specifically designed to measure sleep quality in clinical populations. The Pittsburgh Sleep Quality Index (PSQI) is a self-rated questionnaire which assesses sleep quality and disturbances over a 1-month time interval. Nineteen individual items generate seven \"component\" scores: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. The sum of scores for these seven components yields one global score. Clinical and clinimetric properties of the PSQI were assessed over an 18-month period with \"good\" sleepers (healthy subjects, n = 52) and \"poor\" sleepers (depressed patients, n = 54; sleep-disorder patients, n = 62). Acceptable measures of internal homogeneity, consistency (test-retest reliability), and validity were obtained. A global PSQI score greater than 5 yielded a diagnostic sensitivity of 89.6% and specificity of 86.5% (kappa = 0.75, p less than 0.001) in distinguishing good and poor sleepers. The clinimetric and clinical properties of the PSQI suggest its utility both in psychiatric clinical practice and research activities.", "title": "" } ]
[ { "docid": "f7af51813b7125d31674d506778f52b0", "text": "Monaural speech separation in reverberant conditions is very challenging. In masking-based separation, features extracted from speech mixtures are employed to predict a time-frequency mask. Robust feature extraction is crucial for the performance of supervised speech separation in adverse acoustic environments. Using objective speech intelligibility as the metric, we investigate a wide variety of monaural features in low signalto-noise ratios and moderate to high reverberation. Deep neural networks are employed as the learning machine in our feature investigation. We find considerable performance gain using a contextual window in reverberant speech processing, likely due to temporal structure of reverberation. In addition, we systematically evaluate feature combinations. In unmatched noise and reverberation conditions, the resulting feature set from this study substantially outperforms previously employed sets for speech separation in anechoic conditions.", "title": "" }, { "docid": "91c9dcfd3428fb79afd8d99722c95b69", "text": "In this article we describe results of our research on the disambiguation of user queries using ontologies for categorization. We present an approach to cluster search results by using classes or “Sense Folders” ~prototype categories! derived from the concepts of an assigned ontology, in our case WordNet. Using the semantic relations provided from such a resource, we can assign categories to prior, not annotated documents. The disambiguation of query terms in documents with respect to a user-specific ontology is an important issue in order to improve the retrieval performance for the user. Furthermore, we show that a clustering process can enhance the semantic classification of documents, and we discuss how this clustering process can be further enhanced using only the most descriptive classes of the ontology. © 2006 Wiley Periodicals, Inc.", "title": "" }, { "docid": "cdb937def5a92e3843a761f57278783e", "text": "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.", "title": "" }, { "docid": "d5fe167c68d8393ca8945ffb40b96bd2", "text": "Urban social structure or the spatial arrangement of social groups in cities has long been the subject of scholarly attention in urban studies from a variety of perspectives. 
Such attention has focused primarily on understanding the process and forces that give rise to structure, to some extent on the adverse consequences of socially differentiated or even polarised cities, and on policy to address these consequences or to socially engineer urban structure. Development firms are the key entrepreneurs who build the new urban fabric on which socio-spatial differentiation takes place. Through the processes of targeting specific market segments they play a pivotal role in shaping urban social structure by providing groups of specific types of residential development tailored to specific market groups in specific locations. Yet despite the long history of study of urban social structure, existing approaches have afforded little and insufficient attention to the role of the development industry in shaping urban social space. This paper makes the case that an approach that focuses on the development industry role is needed to complement existing perspectives and because it is highly relevant if not necessary for effective policy making. Moreover, this focus is increasingly important and relevant in a contemporary context where the nature of urban development is changing and in which the decisions of private sector players play an increasing role in shaping structure. This paper outlines the desirable qualities of such an approach including the need to address both structure and agency.", "title": "" }, { "docid": "86b8f11b19fec6a120edddc12e107215", "text": "This paper presents the design procedure, optimization strategy, theoretical analysis, and experimental results of a wideband dual-polarized base station antenna element with superior performance. The proposed antenna element consists of four electric folded dipoles arranged in an octagon shape that are excited simultaneously for each polarization. It provides ±45° slant-polarized radiation that meets all the requirements for base station antenna elements, including stable radiation patterns, low cross polarization level, high port-to-port isolation, and excellent matching across the wide band. The problem of beam squint for beam-tilted arrays is discussed and it is found that the geometry of this element serves to reduce beam squint. Experimental results show that this element has a wide bandwidth of 46.4% from 1.69 to 2.71 GHz with ≥15-dB return loss and 9.8 ± 0.9-dBi gain. Across this wide band, the variations of the half-power-beamwidths of the two polarizations are all within 66.5° ± 5.5°, the port-to-port isolation is >28 dB, the cross-polarization discrimination is >25 dB, and most importantly, the beam squint is <4° with a maximum 10° down-tilt.", "title": "" }, { "docid": "e121febac6b62feaf11f317c057975ce", "text": "The dorsal horn of the spinal cord is the location of the first synapse in pain pathways, and as such, offers a very powerful target for regulation of nociceptive transmission by both local segmental and supraspinal mechanisms. Descending control of spinal nociception originates from many brain regions and plays a critical role in determining the experience of both acute and chronic pain. The earlier concept of descending control as an \"analgesia system\" is now being replaced with a more nuanced model in which pain input is prioritized relative to other competing behavioral needs and homeostatic demands. 
Descending control arises from a number of supraspinal sites, including the midline periaqueductal gray-rostral ventromedial medulla (PAG-RVM) system, and the more lateral and caudal dorsal reticular nucleus (DRt) and ventrolateral medulla (VLM). Inhibitory control from the PAG-RVM system preferentially suppresses nociceptive inputs mediated by C-fibers, preserving sensory-discriminative information conveyed by more rapidly conducting A-fibers. Analysis of the circuitry within the RVM reveals that the neural basis for bidirectional control from the midline system is two populations of neurons, ON-cells and OFF-cells, that are differentially recruited by higher structures important in fear, illness and psychological stress to enhance or inhibit pain. Dynamic shifts in the balance between pain inhibiting and facilitating outflows from the brainstem play a role in setting the gain of nociceptive processing as dictated by behavioral priorities, but are also likely to contribute to pathological pain states.", "title": "" }, { "docid": "c253083ab44c842819059ad64781d51d", "text": "RGB-D data is getting ever more interest from the research community as both cheap cameras appear in the market and the applications of this type of data become more common. A current trend in processing image data is the use of convolutional neural networks (CNNs) that have consistently beat competition in most benchmark data sets. In this paper we investigate the possibility of transferring knowledge between CNNs when processing RGB-D data with the goal of both improving accuracy and reducing training time. We present experiments that show that our proposed approach can achieve both these goals.", "title": "" }, { "docid": "e1885f9c373c355a4df9307c6d90bf83", "text": "Ricinulei possess movable, slender pedipalps with small chelae. When ricinuleids walk, they occasionally touch the soil surface with the tips of their pedipalps. This behavior is similar to the exploration movements they perform with their elongated second legs. We studied the distal areas of the pedipalps of the cavernicolous Mexican species Pseudocellus pearsei with scanning and transmission electron microscopy. Five different surface structures are characteristic for the pedipalps: (1) slender sigmoidal setae with smooth shafts resembling gustatory terminal pore single-walled (tp-sw) sensilla; (2) conspicuous long, mechanoreceptive slit sensilla; (3) a single, short, clubbed seta inside a deep pit representing a no pore single walled (np-sw) sensillum; (4) a single pore organ containing one olfactory wall pore single-walled (wp-sw) sensillum; and (5) gustatory terminal pore sensilla in the fingers of the pedipalp chela. Additionally, the pedipalps bear sensilla which also occur on the other appendages. With this sensory equipment, the pedipalps are highly effective multimodal short range sensory organs which complement the long range sensory function of the second legs. In order to present the complete sensory equipment of all appendages of the investigated Pseudocellus a comparative overview is provided.", "title": "" }, { "docid": "ad86262394b1633243ae44d1f43c1e68", "text": "OBJECTIVE\nTo study dimensional alterations of the alveolar ridge that occurred following tooth extraction as well as processes of bone modelling and remodelling associated with such change.\n\n\nMATERIAL AND METHODS\nTwelve mongrel dogs were included in the study. In both quadrants of the mandible incisions were made in the crevice region of the 3rd and 4th premolars. 
Minute buccal and lingual full thickness flaps were elevated. The four premolars were hemi-sected. The distal roots were removed. The extraction sites were covered with the mobilized gingival tissue. The extractions of the roots and the sacrifice of the dogs were staggered in such a manner that all dogs contributed with sockets representing 1, 2, 4 and 8 weeks of healing. The animals were sacrificed and tissue blocks containing the extraction socket were dissected, decalcified in EDTA, embedded in paraffin and cut in the buccal-lingual plane. The sections were stained in haematoxyline-eosine and examined in the microscope.\n\n\nRESULTS\nIt was demonstrated that marked dimensional alterations occurred during the first 8 weeks following the extraction of mandibular premolars. Thus, in this interval there was a marked osteoclastic activity resulting in resorption of the crestal region of both the buccal and the lingual bone wall. The reduction of the height of the walls was more pronounced at the buccal than at the lingual aspect of the extraction socket. The height reduction was accompanied by a \"horizontal\" bone loss that was caused by osteoclasts present in lacunae on the surface of both the buccal and the lingual bone wall.\n\n\nCONCLUSIONS\nThe resorption of the buccal/lingual walls of the extraction site occurred in two overlapping phases. During phase 1, the bundle bone was resorbed and replaced with woven bone. Since the crest of the buccal bone wall was comprised solely of bundle this modelling resulted in substantial vertical reduction of the buccal crest. Phase 2 included resorption that occurred from the outer surfaces of both bone walls. The reason for this additional bone loss is presently not understood.", "title": "" }, { "docid": "cbc9e0641caea9af6d75a94de26e09df", "text": "At present, spatio-temporal action detection in the video is still a challenging problem, considering the complexity of the background, the variety of the action or the change of the viewpoint in the unconstrained environment. Most of current approaches solve the problem via a two-step processing: first detecting actions at each frame; then linking them, which neglects the continuity of the action and operates in an offline and batch processing manner. In this paper, we attempt to build an online action detection model that introduces the spatio-temporal coherence existed among action regions when performing action category inference and position localization. Specifically, we seek to represent the spatio-temporal context pattern via establishing an encoder-decoder model based on the convolutional recurrent network. The model accepts a video snippet as input and encodes the dynamic information of the action in the forward pass. During the backward pass, it resolves such information at each time instant for action detection via fusing the current static or motion cue. Additionally, we propose an incremental action tube generation algorithm, which accomplishes action bounding-boxes association, action label determination and the temporal trimming in a single pass. Our model takes in the appearance, motion or fused signals as input and is tested on two prevailing datasets, UCF-Sports and UCF-101. 
The experiment results demonstrate the effectiveness of our method which achieves a performance superior or comparable to compared existing approaches.", "title": "" }, { "docid": "607607fe478aa93549b8af7748d93505", "text": "In recent few years, the antenna and sensor communities have witnessed a considerable integration of radio frequency identification (RFID) tag antennas and sensors because of the impetus provided by internet of things (IoT) and cyber-physical systems (CPS). Such types of sensor can find potential applications in structural health monitoring (SHM) because of their passive, wireless, simple, compact size, and multimodal nature, particular in large scale infrastructures during their lifecycle. The big data from these ubiquitous sensors are expected to generate a big impact for intelligent monitoring. A remarkable number of scientific papers demonstrate the possibility that objects can be remotely tracked and intelligently monitored for their physical/chemical/mechanical properties and environment conditions. Most of the work focuses on antenna design, and significant information has been generated to demonstrate feasibilities. Further information is needed to gain deep understanding of the passive RFID antenna sensor systems in order to make them reliable and practical. Nevertheless, this information is scattered over much literature. This paper is to comprehensively summarize and clearly highlight the challenges and state-of-the-art methods of passive RFID antenna sensors and systems in terms of sensing and communication from system point of view. Future trends are also discussed. The future research and development in UK are suggested as well.", "title": "" }, { "docid": "ecfa876df3c83b98ff6c85530e611548", "text": "Hand-crafted rules and reinforcement learning (RL) are two popular choices to obtain dialogue policy. The rule-based policy is often reliable within predefined scope but not self-adaptable, whereas RL is evolvable with data but often suffers from a bad initial performance. We employ a companion learning framework to integrate the two approaches for on-line dialogue policy learning, in which a predefined rule-based policy acts as a teacher and guides a data-driven RL system by giving example actions as well as additional rewards. A novel agent-aware dropout Deep Q-Network (AAD-DQN) is proposed to address the problem of when to consult the teacher and how to learn from the teacher’s experiences. AADDQN, as a data-driven student policy, provides (1) two separate experience memories for student and teacher, (2) an uncertainty estimated by dropout to control the timing of consultation and learning. Simulation experiments showed that the proposed approach can significantly improve both safety and efficiency of on-line policy optimization compared to other companion learning approaches as well as supervised pre-training using static dialogue corpus.", "title": "" }, { "docid": "d0e5ddcc0aa85ba6a3a18796c335dcd2", "text": "A novel planar end-fire circularly polarized (CP) complementary Yagi array antenna is proposed. The antenna has a compact and complementary structure, and exhibits excellent properties (low profile, single feed, broadband, high gain, and CP radiation). It is based on a compact combination of a pair of complementary Yagi arrays with a common driven element. In the complementary structure, the vertical polarization is contributed by a microstrip patch Yagi array, while the horizontal polarization is yielded by a strip dipole Yagi array. 
With the combination of the two orthogonally polarized Yagi arrays, a CP antenna with high gain and wide bandwidth is obtained. With a profile of <inline-formula> <tex-math notation=\"LaTeX\">$0.05\\lambda _{\\mathrm{0}}$ </tex-math></inline-formula> (3 mm), the antenna has a gain of about 8 dBic, an impedance bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\vert S_{11}\\vert < -10 $ </tex-math></inline-formula> dB) of 13.09% (4.57–5.21 GHz) and a 3-dB axial-ratio bandwidth of 10.51% (4.69–5.21 GHz).", "title": "" }, { "docid": "179fcdcb00e7d241321b06dd06fc5f9f", "text": "The ever increasing activity in social networks is mainly manifested by a growing stream of status updating or microblogging. The massive stream of updates emphasizes the need for accurate and efficient clustering of short messages on a large scale. Applying traditional clustering techniques is both inaccurate and inefficient due to sparseness. This paper presents an accurate and efficient algorithm for clustering Twitter tweets. We break the clustering task into two distinctive tasks/stages: (1) batch clustering of user annotated data, and (2) online clustering of a stream of tweets. In the first stage we rely on the habit of ‘tagging’, common in social media streams (e.g. hashtags), thus the algorithm can bootstrap on the tags for clustering of a large pool of hashtagged tweets. The stable clusters achieved in the first stage lend themselves for online clustering of a stream of (mostly) tagless messages. We evaluate our results against gold-standard classification and validate the results by employing multiple clustering evaluation measures (information theoretic, paired, F and greedy). We compare our algorithm to a number of other clustering algorithms and various types of feature sets. Results show that the algorithm presented is both accurate and efficient and can be easily used for large scale clustering of sparse messages as the heavy lifting is achieved on a sublinear number of documents.", "title": "" }, { "docid": "a12769e78530516b382fbc18fe4ec052", "text": "Roget’s Thesaurus has not been sufficiently appreciated in Natural Language Processing. We show that Roget's and WordNet are birds of a feather. In a few typical tests, we compare how the two resources help measure semantic similarity. One of the benchmarks is Miller and Charles’ list of 30 noun pairs to which human judges had assigned similarity measures. We correlate these measures with those computed by several NLP systems. The 30 pairs can be traced back to Rubenstein and Goodenough’s 65 pairs, which we have also studied. Our Roget’sbased system gets correlations of .878 for the smaller and .818 for the larger list of noun pairs; this is quite close to the .885 that Resnik obtained when he employed humans to replicate the Miller and Charles experiment. We further evaluate our measure by using Roget’s and WordNet to answer 80 TOEFL, 50 ESL and 300 Reader’s Digest questions: the correct synonym must be selected amongst a group of four words. Our system gets 78.75%, 82.00% and 74.33% of the questions respectively, better than any published results.", "title": "" }, { "docid": "a4f9d30c707237f3c3eacaab9c6be523", "text": "This paper presents the design of a novel power-divider circuit with an unequal power-dividing ratio. Unlike the conventional approaches, the characteristic impedance values of all the branch lines involved are independent of the dividing ratio. 
The electrical lengths of the line sections are the only circuit parameters to be adjusted. Moreover, the proposed structure does not require impedance transformers at the two output ports. By the introduction of a transmission line between one of the output ports and the isolation resistor, a flexible layout design with reduced parasitic coupling is achieved. For verification, the measured results of a 2 : 1 and a 4 : 1 power-divider circuits operating at 1 GHz are given. A relative bandwidth of over 20% is obtained based on a return loss and port isolation requirement of -20 dB.", "title": "" }, { "docid": "8745e21073db143341e376bad1f0afd7", "text": "The Virtual Reality (VR) user interface style allows natural hand and body motions to manipulate virtual objects in 3D environments using one or more 3D input devices. This style is best suited to application areas where traditional two-dimensional styles fall short, such as scienti c visualization, architectural visualization, and remote manipulation. Currently, the programming e ort required to produce a VR application is too large, and many pitfalls must be avoided in the creation of successful VR programs. In this paper we describe the Decoupled Simulation Model for creating successful VR applications, and a software system that embodies this model. The MR Toolkit simpli es the development of VR applications by providing standard facilities required by a wide range of VR user interfaces. These facilities include support for distributed computing, head-mounted displays, room geometry management, performance monitoring, hand input devices, and sound feedback. The MR Toolkit encourages programmers to structure their applications to take advantage of the distributed computing capabilities of workstation networks improving the application's performance. In this paper, the motivations and the architecture of the toolkit are outlined, the programmer's view is described, and a simple application is brie y described. CR", "title": "" }, { "docid": "401f93b2405bd54882fe876365195425", "text": "Previous approaches to training syntaxbased sentiment classification models required phrase-level annotated corpora, which are not readily available in many languages other than English. Thus, we propose the use of tree-structured Long Short-Term Memory with an attention mechanism that pays attention to each subtree of the parse tree. Experimental results indicate that our model achieves the stateof-the-art performance in a Japanese sentiment classification task.", "title": "" }, { "docid": "ebb43198da619d656c068f2ab1bfe47f", "text": "Remote data integrity checking (RDIC) enables a server to prove to an auditor the integrity of a stored file. It is a useful technology for remote storage such as cloud storage. The auditor could be a party other than the data owner; hence, an RDIC proof is based usually on publicly available information. To capture the need of data privacy against an untrusted auditor, Hao et al. formally defined “privacy against third party verifiers” as one of the security requirements and proposed a protocol satisfying this definition. However, we observe that all existing protocols with public verifiability supporting data update, including Hao et al.’s proposal, require the data owner to publish some meta-data related to the stored data. We show that the auditor can tell whether or not a client has stored a specific file and link various parts of those files based solely on the published meta-data in Hao et al.’s protocol. 
In other words, the notion “privacy against third party verifiers” is not sufficient in protecting data privacy, and hence, we introduce “zero-knowledge privacy” to ensure the third party verifier learns nothing about the client’s data from all available information. We enhance the privacy of Hao et al.’s protocol, develop a prototype to evaluate the performance and perform experiment to demonstrate the practicality of our proposal.", "title": "" }, { "docid": "990ee920895672c2b8b05bc6cf4fad3f", "text": "The world market of e-scooter is expected to experiment an increase of 15% in Western Europe between 2015 and 2025. In order to push this growth it is needed to develop new low-cost more efficient and reliable drives with high torque to weight ratio. In this paper a new axial-flux switched reluctance motor is proposed in order to accomplish this goal. The motor is constituted by a stator sandwiched by two rotors in which the ferromagnetic parts are made of soft magnetic composites. It has a new disposition of the stator and the rotor poles and shorter flux paths Simulations have demonstrated that the proposed axial-flux switched reluctance motor drive is able to meet the requirements of an e-scooter.", "title": "" } ]
scidocsrr
d0fae7c41039dd051f5e4be53bc06b64
Deep-Learning Convolutional Neural Networks for scattered shrub detection with Google Earth Imagery
[ { "docid": "4bec71105c8dca3d0b48e99cdd4e809a", "text": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.", "title": "" }, { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" } ]
[ { "docid": "b10a0f8d888d4ecfc0e0d154ae7416dc", "text": "The purpose of this study was to investigate the differences in the viscoelastic properties of human tendon structures (tendon and aponeurosis) in the medial gastrocnemius muscle between men (n=16) and women (n=13). The elongation of the tendon and aponeurosis of the medial gastrocnemius muscle was measured directly by ultrasonography, while the subjects performed ramp isometric plantar flexion up to the voluntary maximum, followed by a ramp relaxation. The relationship between the estimated muscle force (Fm) and tendon elongation (L) during the ascending phase was fitted to a linear regression, the slope of which was defined as stiffness. The percentage of the area within the Fm-L loop to the area beneath the curve during the ascending phase was calculated as hysteresis. The L values at force production levels beyond 50 N were significantly greater for women than for men. The maximum strain (100×ΔL/initial tendon length) was significantly greater in women [9.5 (1.1)%] than in men [8.1 (1.6)%]. The stiffness and Young's modulus were significantly lower in women [16.5 (3.4) N/mm, 277 (25) MPa] than in men [25.9 (7.0) N/mm, 356 (32) MPa]. Furthermore, the hysteresis was significantly lower in women [11.1 (5.9)%] than in men [18.7 (8.5)%, P=0.048]. These results suggest that there are gender differences in the viscoelastic properties of tendon structures and that these might in part account for previously observed performance differences between the genders.", "title": "" }, { "docid": "24c6f0454bad7506a600483434914be0", "text": "Query answers from on-line databases can easily be corrupted by hackers or malicious database publishers. Thus it is important to provide mechanisms which allow clients to trust the results from on-line queries. Authentic publication allows untrusted publishers to answer securely queries from clients on behalf of trusted off-line data owners. Publishers validate answers using hard-to-forge verification objects VOs), which clients can check efficiently. This approach provides greater scalability, by making it easy to add more publishers, and better security, since on-line publishers do not need to be trusted. To make authentic publication attractive, it is important for the VOs to be small, efficient to compute, and efficient to verify. This has lead researchers to develop independently several different schemes for efficient VO computation based on specific data structures. Our goal is to develop a unifying framework for these disparate results, leading to a generalized security result. In this paper we characterize a broad class of data structures which we call Search DAGs, and we develop a generalized algorithm for the construction of VOs for Search DAGs. We prove that the VOs thus constructed are secure, and that they are efficient to compute and verify. We demonstrate how this approach easily captures existing work on simple structures such as binary trees, multi-dimensional range trees, tries, and skip lists. Once these are shown to be Search DAGs, the requisite security and efficiency results immediately follow from our general theorems. Going further, we also use Search DAGs to produce and prove the security of authenticated versions of two complex data models for efficient multi-dimensional range searches. This allows efficient VOs to be computed (size O(log N + T)) for typical one- and two-dimensional range queries, where the query answer is of size T and the database is of size N. 
We also show I/O-efficient schemes to construct the VOs. For a system with disk blocks of size B, we answer one-dimensional and three-sided range queries and compute the VOs with O(logB N + T/B) I/O operations using linear size data structures.", "title": "" }, { "docid": "2b18aa800c4251e8cd8fbe39614eda4a", "text": "We consider the problem of finding small distance-preserving subgraphs of undirected, unweighted interval graphs with k terminal vertices. We prove the following results. 1. Finding an optimal distance-preserving subgraph is NP-hard for general graphs. 2. Every interval graph admits a subgraph with O(k) branching vertices that approximates pairwise terminal distances up to an additive term of +1. 3. There exists an interval graph Gint for which the +1 approximation is necessary to obtain the O(k) upper bound on the number of branching vertices. In particular, any distance-preserving subgraph of Gint has Ω(k log k) branching vertices. 4. Every interval graph admits a distance-preserving subgraph with O(k log k) branching vertices, i.e. the Ω(k log k) lower bound for interval graphs is tight. 5. There exists an interval graph such that every optimal distance-preserving subgraph of it has O(k) branching vertices and Ω(k log k) branching edges, thereby providing a separation between branching vertices and branching edges. The O(k) bound for distance-approximating subgraphs follows from a näıve analysis of shortest paths in interval graphs. Gint is constructed using bit-reversal permutation matrices. The O(k log k) bound for distance-preserving subgraphs uses a divide-and-conquer approach. Finally, the separation between branching vertices and branching edges employs Hansel’s lemma [Han64] for graph covering.", "title": "" }, { "docid": "a560892a1cd4fdefc3271d426a3ff936", "text": "We present a variant of hierarchical marking menus where items are selected using a series of inflection-free simple marks, rather than the single \"zig-zag\" compound mark used in the traditional design. Theoretical analysis indicates that this simple mark approach has the potential to significantly increase the number of items in a marking menu that can be selected efficiently and accurately. A user experiment is presented that compares the simple and compound mark techniques. Results show that the simple mark technique allows for significantly more accurate and faster menu selections overall, but most importantly also in menus with a large number of items where performance of the compound mark technique is particularly poor. The simple mark technique also requires significantly less physical input space to perform the selections, making it particularly suitable for small footprint pen-based input devices. Visual design alternatives are also discussed.", "title": "" }, { "docid": "bfe58868ab05a6ba607ef1f288d37f33", "text": "There is much debate as to whether online offenders are a distinct group of sex offenders or if they are simply typical sex offenders using a new technology. A meta-analysis was conducted to examine the extent to which online and offline offenders differ on demographic and psychological variables. Online offenders were more likely to be Caucasian and were slightly younger than offline offenders. In terms of psychological variables, online offenders had greater victim empathy, greater sexual deviancy, and lower impression management than offline offenders. Both online and offline offenders reported greater rates of childhood physical and sexual abuse than the general population. 
Additionally, online offenders were more likely to be Caucasian, younger, single, and unemployed compared with the general population. Many of the observed differences can be explained by assuming that online offenders, compared with offline offenders, have greater self-control and more psychological barriers to acting on their deviant interests.", "title": "" }, { "docid": "d7e2654767d1178871f3f787f7616a94", "text": "We propose a nonparametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute the final segmentation of the test subject. Such label fusion methods have been shown to yield accurate segmentation, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures. To the best of our knowledge, this manuscript presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multiatlas segmentation algorithms are interpreted as special cases of our framework. We conduct two sets of experiments to validate the proposed methods. In the first set of experiments, we use 39 brain MRI scans - with manually segmented white matter, cerebral cortex, ventricles and subcortical structures - to compare different label fusion algorithms and the widely-used FreeSurfer whole-brain segmentation tool. Our results indicate that the proposed framework yields more accurate segmentation than FreeSurfer and previous label fusion algorithms. In a second experiment, we use brain MRI scans of 282 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal volume changes in a study of aging and Alzheimer's Disease.", "title": "" }, { "docid": "838938383b161287337d1867c5528d9d", "text": "We propose a framework for collaborative filtering based on Restricted Boltzmann Machines (RBM), which extends previous RBMbased approaches in several important directions. First, while previous RBM research has focused on modeling the correlation between item ratings, we model both user-user and item-item correlations in a unified hybrid non-IID framework. We further use real values in the visible layer as opposed to multinomial variables, thus taking advantage of the natural order between user-item ratings. Finally, we explore the potential of combining the original training data with data generated by the RBM-based model itself in a bootstrapping fashion. The evaluation on two MovieLens datasets (with 100K and 1M user-item ratings, respectively), shows that our RBM model rivals the best previouslyproposed approaches.", "title": "" }, { "docid": "5214f391d5b152f9809bec1f6f069d21", "text": "Abstract—Magnetic resonance imaging (MRI) is an important diagnostic imaging technique for the early detection of brain cancer. Brain cancer is one of the most dangerous diseases occurring commonly among human beings. The chances of survival can be increased if the cancer is detected at its early stage. MRI brain image plays a vital role in assisting radiologists to access patients for diagnosis and treatment. 
Studying of medical image by the Radiologist is not only a tedious and time consuming process but also accuracy depends upon their experience. So, the use of computer aided systems becomes very necessary to overcome these limitations. Even though several automated methods are available, still segmentation of MRI brain image remains as a challenging problem due to its complexity and there is no standard algorithm that can produce satisfactory results. In this review paper, various current methodologies of brain image segmentation using automated algorithms that are accurate and requires little user interaction are reviewed and their advantages, disadvantages are discussed. This review paper guides in combining two or more methods together to produce accurate results.", "title": "" }, { "docid": "48a476d5100f2783455fabb6aa566eba", "text": "Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4--20 complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291--347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.].", "title": "" }, { "docid": "22554a4716f348a6f43299f193d5534f", "text": "Unsolicited bulk e-mail, or SPAM, is a means to an end. For virtually all such messages, the intent is to attract the recipient into entering a commercial transaction — typically via a linked Web site. While the prodigious infrastructure used to pump out billions of such solicitations is essential, the engine driving this process is ultimately th e “point-of-sale” — the various money-making “scams” that extract value from Internet users. In the hopes of better understanding the business pressures exerted on spammers, this paper focuses squarely on the Internet infrastructure used to host and support such scams. 
We describe an opportunistic measurement technique called spamscatterthat mines emails in real-time, follows the embedded link structure, and automatically clusters the destination Web sites using image shinglingto capture graphical similarity between rendered sites. We have implemented this approach on a large real-time spam feed (over 1M messages per week) and have identified and analyzed over 2,000 distinct scams on 7,000 distinct servers.", "title": "" }, { "docid": "3c0f6d8af7f005611773de7ab845c22f", "text": "We propose a method to decompose the changes in the wage distribution over a period of time in several factors contributing to those changes. The method is based on the estimation of marginal wage distributions consistent with a conditional distribution estimated by quantile regression as well as with any hypothesized distribution for the covariates. Comparing the marginal distributions implied by different distributions for the covariates, one is then able to perform counterfactual exercises. The proposed methodology enables the identification of the sources of the increased wage inequality observed in most countries. Specifically, it decomposes the changes in the wage distribution over a period of time into several factors contributing to those changes, namely by discriminating between changes in the characteristics of the working population and changes in the returns to these characteristics. We apply this methodology to Portuguese data for the period 1986–1995, and find that the observed increase in educational levels contributed decisively towards greater wage inequality. Copyright  2005 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "2d774ec62cdac08997cb8b86e73fe015", "text": "This paper focuses on modeling resolving and simulations of the inverse kinematics of an anthropomorphic redundant robotic structure with seven degrees of freedom and a workspace similar to human arm. Also the kinematical model and the kinematics equations of the robotic arm are presented. A method of resolving the redundancy of seven degrees of freedom robotic arm is presented using Fuzzy Logic toolbox from MATLAB®.", "title": "" }, { "docid": "e682f1b64d6eae69252ea2298f035ac6", "text": "Objective\nPatient notes in electronic health records (EHRs) may contain critical information for medical investigations. However, the vast majority of medical investigators can only access de-identified notes, in order to protect the confidentiality of patients. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) defines 18 types of protected health information that needs to be removed to de-identify patient notes. Manual de-identification is impractical given the size of electronic health record databases, the limited number of researchers with access to non-de-identified notes, and the frequent mistakes of human annotators. A reliable automated de-identification system would consequently be of high value.\n\n\nMaterials and Methods\nWe introduce the first de-identification system based on artificial neural networks (ANNs), which requires no handcrafted features or rules, unlike existing systems. We compare the performance of the system with state-of-the-art systems on two datasets: the i2b2 2014 de-identification challenge dataset, which is the largest publicly available de-identification dataset, and the MIMIC de-identification dataset, which we assembled and is twice as large as the i2b2 2014 dataset.\n\n\nResults\nOur ANN model outperforms the state-of-the-art systems. 
It yields an F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a precision of 98.32, and an F1-score of 99.23 on the MIMIC de-identification dataset, with a recall of 99.25 and a precision of 99.21.\n\n\nConclusion\nOur findings support the use of ANNs for de-identification of patient notes, as they show better performance than previously published systems while requiring no manual feature engineering.", "title": "" }, { "docid": "f833db8a1e61634f1ff20be721bd7c64", "text": "Low-rank modeling has many important applications in computer vision and machine learning. While the matrix rank is often approximated by the convex nuclear norm, the use of nonconvex low-rank regularizers has demonstrated better empirical performance. However, the resulting optimization problem is much more challenging. Recent state-of-the-art requires an expensive full SVD in each iteration. In this paper, we show that for many commonly-used nonconvex low-rank regularizers, the singular values obtained from the proximal operator can be automatically threshold. This allows the proximal operator to be efficiently approximated by the power method. We then develop a fast proximal algorithm and its accelerated variant with inexact proximal step. It can be guaranteed that the squared distance between consecutive iterates converges at a rate of , where is the number of iterations. Furthermore, we show the proposed algorithm can be parallelized, and the resultant algorithm achieves nearly linear speedup w.r.t. the number of threads. Extensive experiments are performed on matrix completion and robust principal component analysis. Significant speedup over the state-of-the-art is observed.", "title": "" }, { "docid": "264dbf645418fc301b3633a280c3ad0d", "text": "Music prediction tasks range from predicting tags given a song or clip of audio, predicting the name of the artist, or predicting related songs given a song, clip, artist name or tag. That is, we are interested in every semantic relationship between the different musical concepts in our database. In realistically sized databases, the number of songs is measured in the hundreds of thousands or more, and the number of artists in the tens of thousands or more, providing a considerable challenge to standard machine learning techniques. In this work, we propose a method that scales to such datasets which attempts to capture the semantic similarities between the database items by modeling audio, artist names, and tags in a single low-dimensional semantic embedding space. This choice of space is learnt by optimizing the set of prediction tasks of interest jointly using multi-task learning. Our single model learnt by training on the joint objective function is shown experimentally to have improved accuracy over training on each task alone. Our method also outperforms the baseline methods tried and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where the semantic space captures well the similarities of interest.", "title": "" }, { "docid": "b6bbf7affff4c6a29e964141302daf56", "text": "Existing natural media painting simulations have produced high-quality results, but have required powerful compute hardware and have been limited to screen resolutions. Digital artists would like to be able to use watercolor-like painting tools, but at print resolutions and on lower end hardware such as laptops or even slates. 
We present a procedural algorithm for generating watercolor-like dynamic paint behaviors in a lightweight manner. Our goal is not to exactly duplicate watercolor painting, but to create a range of dynamic behaviors that allow users to achieve a similar style of process and result, while at the same time having a unique character of its own. Our stroke representation is vector based, allowing for rendering at arbitrary resolutions, and our procedural pigment advection algorithm is fast enough to support painting on slate devices. We demonstrate our technique in a commercially available slate application used by professional artists. Finally, we present a detailed analysis of the different vector-rendering technologies available.", "title": "" }, { "docid": "821ce57b64025cbf01310ba83f46d091", "text": "Machine learning techniques are widely used in the domain of Natural Language Processing (NLP) and Computer Vision (CV), In order to capture complex and non-linear features deeper machine learning architectures become more and more popular. A lot of the state of art performance have been reported by employing deep learning techniques. Convolutional Neural Network (CNN) is one variant of deep learning architectures which has received intense attention in recent years. CNN is inspired from the domain of biology, which tries to mimic the way of how signal are processed in human brain. CNN is type of feed forward artificial neural network which are constructed by multiple layers. Signals are passed through these layers with non-linear activation functions. Within each layer, there are a lot of independent node to process the signal in different regions or aspects. CNN has achieved great success in sentence modeling, image recognition and feature detection. In this paper, we introduce the motivation, intuition, architectures and algorithm of CNN. In particular, we discuss several recent achievements of CNN in NLP and CV.", "title": "" }, { "docid": "976b0f7f2fd2d1b52f3bec40d51df87a", "text": "The availability of multicore processors across a wide range of computing platforms has created a strong demand for software frameworks that can harness these resources. This paper overviews the Cilk++ programming environment, which incorporates a compiler, a runtime system, and a race-detection tool. The Cilk++ runtime system guarantees to load-balance computations effectively. To cope with legacy codes containing global variables, Cilk++ provides a \"hyperobject\" library which allows races on nonlocal variables to be mitigated without lock contention or substantial code restructuring.", "title": "" }, { "docid": "af4303c27b01b865d85b66c936f669bd", "text": "The healthcare industry is producing massive amounts of data which need to be mine to discover hidden information for effective prediction, exploration, diagnosis and decision making. Machine learning techniques can help and provides medication to handle this circumstances. Moreover, Chronic Kidney Disease prediction is one of the most central problems in medical decision making because it is one of the leading cause of death. So, automated tool for early prediction of this disease will be useful to cure. 
In this study, experiments were conducted for the prediction of Chronic Kidney Disease using a dataset obtained from the UCI Machine Learning Repository and six machine learning algorithms, namely: Random Forest (RF) classifiers, Sequential Minimal Optimization (SMO), NaiveBayes, Radial Basis Function (RBF), Multilayer Perceptron Classifier (MLPC), and SimpleLogistic (SLG). The selected features are used for training and testing of each classifier individually with ten-fold cross validation. The results obtained show that the RF classifier outperforms the other classifiers in terms of Area under the ROC curve (AUC), accuracy, and MCC, with values of 1.0, 1.0, and 1.0 respectively.", "title": "" } ]
scidocsrr
433a2fdc6cd0cd35260fed791d2593f3
Gaussian Process Regression-Based Video Anomaly Detection and Localization With Hierarchical Feature Representation
[ { "docid": "d4fa5b9d4530b12a394c1e98ea2793b1", "text": "Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the chi2-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition.", "title": "" }, { "docid": "b9a893fb526955b5131860a1402e2f7c", "text": "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.", "title": "" }, { "docid": "ea84c28e02a38caff14683681ea264d7", "text": "This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression. While local anomaly is typically detected as a 3D pattern matching problem, we are more interested in global anomaly that involves multiple normal events interacting in an unusual manner such as car accident. To simultaneously detect local and global anomalies, we formulate the extraction of normal interactions from training video as the problem of efficiently finding the frequent geometric relations of the nearby sparse spatio-temporal interest points. A codebook of interaction templates is then constructed and modeled using Gaussian process regression. A novel inference method for computing the likelihood of an observed interaction is also proposed. As such, our model is robust to slight topological deformations and can handle the noise and data unbalance problems in the training data. Simulations show that our system outperforms the main state-of-the-art methods on this topic and achieves at least 80% detection rates based on three challenging datasets.", "title": "" } ]
[ { "docid": "b5e0faba5be394523d10a130289514c2", "text": "Child neglect results from either acts of omission or of commission. Fatalities from neglect account for 30% to 40% of deaths caused by child maltreatment. Deaths may occur from failure to provide the basic needs of infancy such as food or medical care. Medical care may also be withheld because of parental religious beliefs. Inadequate supervision may contribute to a child's injury or death through adverse events involving drowning, fires, and firearms. Recognizing the factors contributing to a child's death is facilitated by the action of multidisciplinary child death review teams. As with other forms of child maltreatment, prevention and early intervention strategies are needed to minimize the risk of injury and death to children.", "title": "" }, { "docid": "5523f345b8509e8636374d14ac0cf9de", "text": "In this paper we discuss and create a MQTT based Secured home automation system, by using mentioned sensors and using Raspberry pi B+ model as the network gateway, here we have implemented MQTT Protocol for transferring & receiving sensor data and finally getting access to those sensor data, also we have implemented ACL (access control list) to provide encryption method for the data and finally monitoring those data on webpage or any network devices. R-pi has been used as a gateway or the main server in the whole system, which has various sensor connected to it via wired or wireless communication.", "title": "" }, { "docid": "d7f0db0754afa0701c7f46c48f4844e0", "text": "One drawback of classical parallel robots is their limited workspace, mainly due to the limitation of the stroke of linear actuators. Parallel wire robots (also known as Tendon-based Steward platforms or cable robots) face this problem through substitution of the actuators by wires (or tendons, cables, . . .). Tendon-based Steward platforms have been proposed in (Landsberger & Sheridan, 1985). Although these robots share the basic concepts of classical parallel robots, there are some major differences:", "title": "" }, { "docid": "0dc3a616cf2d9c4dac08cbe94bbbed0e", "text": "Digital news with a variety topics is abundant on the internet. The problem is to classify news based on its appropriate category to facilitate user to find relevant news rapidly. Classifier engine is used to split any news automatically into the respective category. This research employs Support Vector Machine (SVM) to classify Indonesian news. SVM is a robust method to classify binary classes. The core processing of SVM is in the formation of an optimum separating plane to separate the different classes. For multiclass problem, a mechanism called one against one is used to combine the binary classification result. Documents were taken from the Indonesian digital news site, www.kompas.com. The experiment showed a promising result with the accuracy rate of 85%. This system is feasible to be implemented on Indonesian news classification. Keywords—classification, Indonesian news, text processing, support vector machine", "title": "" }, { "docid": "b99efb63e8016c7f5ab09e868ae894da", "text": "The popular bag of words approach for action recognition is based on the classifying quantized local features density. This approach focuses excessively on the local features but discards all information about the interactions among them. Local features themselves may not be discriminative enough, but combined with their contexts, they can be very useful for the recognition of some actions. 
In this paper, we present a novel representation that captures contextual interactions between interest points, based on the density of all features observed in each interest point's mutliscale spatio-temporal contextual domain. We demonstrate that augmenting local features with our contextual feature significantly improves the recognition performance.", "title": "" }, { "docid": "78eecb90bad21916621687d8eac0e557", "text": "AIM\nThe aim of this paper is to present the Australian Spasticity Assessment Scale (ASAS) and to report studies of its interrater reliability. The ASAS identifies the presence of spasticity by confirming a velocity-dependent increased response to rapid passive movement and quantifies it using an ordinal scale.\n\n\nMETHOD\nThe rationale and procedure for the ASAS is described. Twenty-two participants with spastic CP (16 males; age range 1y 11mo-15y 3mo) who had not had botulinum neurotoxin-A within 4 months, or bony or soft tissue surgery within 12 months, were recruited from the spasticity management clinic of a tertiary paediatric teaching hospital. Fourteen muscles in each child were assessed by each of three experienced independent raters. ASAS was recorded for all muscles. Interrater reliability was calculated using the weighted kappa statistic (quadratic weighting; κqw) for individual muscles, for upper limbs, for lower limbs, and between raters.\n\n\nRESULTS\nThe weighted kappa ranged between 0.75 and 0.92 for individual muscle groups and was 0.87 between raters.\n\n\nINTERPRETATION\nThe ASAS complies with the definition of spasticity and is clinically feasible in paediatric settings. Our estimates of interrater reliability for the ASAS exceed that of the most commonly used spasticity scoring systems.", "title": "" }, { "docid": "f7ec4acfd6c4916f3fec0dfa26db558c", "text": "In the real-world online social networks, users tend to form different social communities. Due to its extensive applications, community detection in online social networks has been a hot research topic in recent years. In this chapter, we will focus on introducing the social community detection problem in online social networks. To be more specific, we will take the hard community detection problem as an example to introduce the existing models proposed for conventional (one single) homogeneous social network, and the recent broad learning based (multiple aligned) heterogeneous social networks respectively. Key Word: Community Detection; Social Media; Aligned Heterogeneous Networks; Broad Learning", "title": "" }, { "docid": "4d56f134c2e2a597948bcf9b1cf37385", "text": "This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created largescale dataset of synthetic 3D scenes with dense volumetric annotations. 
Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code are available at http://sscnet.cs.princeton.edu.", "title": "" }, { "docid": "da4b970f53ec46a6d2e3ca03086e110d", "text": "In this communication, a novel filtering antenna is proposed by utilizing active frequency selective surface (FSS), which can simultaneously achieve filtering and beam steering function. The FSS unit is composed of a metallic rectangular ring and a patch, with a pair of microwave varactor diodes inserted in between along incident electric field polarization direction. Transmission phase of the emitted wave can be tuned by changing the bias voltage applied to the varactor diodes. Through different configurations of the bias voltages, we can obtain the gradient phase distribution of the emitted wave along E- and H-plane. This active FSS is then fabricated and utilized as a radome above a conventional horn antenna to demonstrate its ability of steering the beam radiated from the horn. The experimental results agree well with the simulated ones, which show that the horn antenna with the active FSS can realize beam steering in both E- and H-plane in a range of ±30° at 5.3 GHz with a bandwidth of 180 MHz.", "title": "" }, { "docid": "7eec1e737523dc3b78de135fc71b058f", "text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences, generally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches.", "title": "" }, { "docid": "bfc85b95287e4abc2308849294384d1e", "text": "50 & 100 YEARS AGO A Congress was held in Singapore during December 2–9 to celebrate “the Centenary of the formulation of the theory of Evolution by Charles Darwin and Alfred Russel Wallace and the Bicentenary of the publication of the tenth edition of the ‘Systema Naturae’ by Linnaeus”. It was particularly fitting that this Congress should have been held in Singapore for ... it directed special attention to the work of Wallace, who was one of the greatest biologists ever to have worked in south-east Asia ... Prof. Haldane then delivered his presidential address ... The president emphasised the stimuli gained by Linnaeus, Darwin and Wallace through working in peripheral areas where lack of knowledge was a challenge.
He suggested that the next major biological advance may well come for similar reasons from peripheral places such as Singapore, or Calcutta, where this challenge still remains and where the lack of complex scientific apparatus drives biologists into different and long-neglected fields of research. From Nature 14 March 1959.", "title": "" }, { "docid": "c166ae2b9085cc4769438b1ca8ac8ee0", "text": "Texts in web pages, images and videos contain important clues for information indexing and retrieval. Most existing text extraction methods depend on the language type and text appearance. In this paper, a novel and universal method of image text extraction is proposed. A coarse-to-fine text location method is implemented. Firstly, a multi-scale approach is adopted to locate texts with different font sizes. Secondly, projection profiles are used in location refinement step. Color-based k-means clustering is adopted in text segmentation. Compared to grayscale image which is used in most existing methods, color image is more suitable for segmentation based on clustering. It treats corner-points, edge-points and other points equally so that it solves the problem of handling multilingual text. It is demonstrated in experimental results that best performance is obtained when k is 3. Comparative experimental results on a large number of images show that our method is accurate and robust in various conditions.", "title": "" }, { "docid": "8bc0edddcfac4aabb7fcf0fe4ed8035b", "text": "Nowadays, there are many taxis traversing around the city searching for available passengers, but their hunts of passengers are not always efficient. To the dynamics of traffic and biased passenger distributions, current offline recommendations based on place of interests may not work well. In this paper, we define a new problem, global-optimal trajectory retrieving (GOTR), as finding a connected trajectory of high profit and high probability to pick up a passenger within a given time period in real-time. To tackle this challenging problem, we present a system, called HUNTS, based on the knowledge from both historical and online GPS data and business data. To achieve above objectives, first, we propose a dynamic scoring system to evaluate each road segment in different time periods by considering both picking-up rate and profit factors. Second, we introduce a novel method, called trajectory sewing, based on a heuristic method and the Skyline technique, to produce an approximate optimal trajectory in real-time. Our method produces a connected trajectory rather than several place of interests to avoid frequent next-hop queries. Third, to avoid congestion and other real-time traffic situations, we update the score of each road segment constantly via an online handler. Finally, we validate our system using a large-scale data of around 15,000 taxis in a large city in China, and compare the results with regular taxis' hunts and the state-of-the-art.", "title": "" }, { "docid": "2e3dcd4ba0dbcabb86c8716d73760028", "text": "Power transformers are one of the most critical devices in power systems. It is responsible for voltage conversion, power distribution and transmission, and provides power services. Therefore, the normal operation of the transformer is an important guarantee for the safe, reliable, high quality and economical operation of the power system. It is necessary to minimize and reduce the occurrence of transformer failure and accident. 
The on-line monitoring and fault diagnosis of power equipment is not only the prerequisite for realizing the predictive maintenance of equipment, but also the key to ensure the safe operation of equipment. Although the analysis of dissolved gas in transformer oil is an important means of transformer insulation monitoring, the coexistence of two kinds of faults, such as discharge and overheat, can lead to a lower positive rate of diagnosis. In this paper, we use the basic particle swarm optimization algorithm to optimize the BP neural network DGA method, select the typical oil in the oil as a neural network input, and then use the trained particle swarm algorithm to optimize the neural network for transformer fault type diagnosis. The results show that the method has a good classification effect, which can solve the problem of difficult to distinguish the faults of the transformer when the discharge and overheat coexist. The positive rate of fault diagnosis is high.", "title": "" }, { "docid": "44f2eaf0219f44a82a9967ec9a9d36cd", "text": "Two measures of social function designed for community studies of normal aging and mild senile dementia were evaluated in 195 older adults who underwent neurological, cognitive, and affective assessment. An examining and a reviewing neurologist and a neurologically trained nurse independently rated each on a Scale of Functional Capacity. Interrater reliability was high (examining vs. reviewing neurologist, r = .97; examining neurologist vs. nurse, tau b = .802; p less than .001 for both comparisons). Estimates correlated well with an established measure of social function and with results of cognitive tests. Alternate informants evaluated participants on the Functional Activities Questionnaire and the Instrumental Activities of Daily Living Scale. The Functional Activities Questionnaire was superior to the Instrumental Activities of Daily scores. Used alone as a diagnostic tool, the Functional Activities Questionnaire was more sensitive than distinguishing between normal and demented individuals.", "title": "" }, { "docid": "cd8c1c24d4996217c8927be18c48488f", "text": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTMbased models. We propose the weight-dropped LSTM which uses DropConnect on hidden-tohidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.", "title": "" }, { "docid": "dd2e81d24584fe0684266217b732d881", "text": "In order to understand the role of titanium isopropoxide (TIPT) catalyst on insulation rejuvenation for water tree aged cables, dielectric properties and micro structure changes are investigated for the rejuvenated cables. 
Needle-shape defects are made inside cross-linked polyethylene (XLPE) cable samples to form water tree in the XLPE layer. The water tree aged samples are injected by the liquid with phenylmethyldimethoxy silane (PMDMS) catalyzed by TIPT for rejuvenation, and the breakdown voltage of the rejuvenated samples is significantly higher than that of the new samples. By the observation of scanning electronic microscope (SEM), the nano-TiO2 particles are observed inside the breakdown channels of the rejuvenated samples. Accordingly, the insulation performance of rejuvenated samples is significantly enhanced by the nano-TiO2 particles. Through analyzing the products of hydrolysis from TIPT, the nano-scale TiO2 particles are observed, and its micro-morphology is consistent with that observed inside the breakdown channels. According to the observation, the insulation enhancement mechanism is described. Therefore, the dielectric property of the rejuvenated cables is improved due to the nano-TiO2 produced by the hydrolysis from TIPT.", "title": "" }, { "docid": "b50ea06c20fb22d7060f08bc86d9d6ca", "text": "The advent of the Social Web has provided netizens with new tools for creating and sharing, in a time- and cost-efficient way, their contents, ideas, and opinions with virtually the millions of people connected to the World Wide Web. This huge amount of information, however, is mainly unstructured as specifically produced for human consumption and, hence, it is not directly machine-processable. In order to enable a more efficient passage from unstructured information to structured data, aspect-based opinion mining models the relations between opinion targets contained in a document and the polarity values associated with these. Because aspects are often implicit, however, spotting them and calculating their respective polarity is an extremely difficult task, which is closer to natural language understanding rather than natural language processing. To this end, Sentic LDA exploits common-sense reasoning to shift LDA clustering from a syntactic to a semantic level. Rather than looking at word co-occurrence frequencies, Sentic LDA leverages on the semantics associated with words and multi-word expressions to improve clustering and, hence, outperform state-of-the-art techniques for aspect extraction.", "title": "" }, { "docid": "cd8bd76ecebbd939400b4724499f7592", "text": "Scene recognition with RGB images has been extensively studied and has reached very remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so often leverages RGB large datasets, by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching bottom layers, which is key to learn modality-specific features. In contrast, we focus on the bottom layers, and propose an alternative strategy to learn depth features combining local weakly supervised training from patches followed by global fine tuning with images. This strategy is capable of learning very discriminative depthspecific features with limited depth images, without resorting to Places-CNN. In addition we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. 
For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth-only and combined RGB-D data.", "title": "" } ]
scidocsrr
ad70520db9a7d7fe76954d9bcd1730e2
Many roads lead to Rome: mapping users' problem solving strategies
[ { "docid": "e05a7919e3e0333adef243694e7d50cb", "text": "WHEN the magician pulls the rabbit from the hat, the spectator can respond either with mystification or with curiosity. He can enjoy the surprise and the wonder of the unexplained (and perhaps inexplicable), or he can search for an explanation. Suppose curiosity is his main response—that he adopts a scientist's attitude toward the mystery. What questions should a scientific theory of magic answer? First, it should predict the performance of a magician handling specified tasks—producing a rabbit from a hat, say. It should explain how the production takes place, what processes are used, and what mechanisms perform those processes. It should predict the incidental phenomena that accompany the magic—the magician's patter and his pretty assistant—and the relation of these to the mystification process. It should show how changes in the attendant conditions—both changes \"inside\" the members of the audience and changes in the feat of magic—alter the magician's behavior. It should explain how specific and general magician's skills are learned, and what the magician \"has\" when he has learned them.", "title": "" } ]
[ { "docid": "3e9845c255b5e816741c04c4f7cf5295", "text": "This paper presents the packaging technology and the integrated antenna design for a miniaturized 122-GHz radar sensor. The package layout and the assembly process are shortly explained. Measurements of the antenna including the flip chip interconnect are presented that have been achieved by replacing the IC with a dummy chip that only contains a through-line. Afterwards, radiation pattern measurements are shown that were recorded using the radar sensor as transmitter. Finally, details of the fully integrated radar sensor are given, together with results of the first Doppler measurements.", "title": "" }, { "docid": "226fdcdd185b2686e11732998dca31a2", "text": "Blockchain has received much attention in recent years. This immense popularity has raised a number of concerns, scalability of blockchain systems being a common one. In this paper, we seek to understand how Ethereum, a well-established blockchain system, would respond to sharding. Sharding is a prevalent technique to increase the scalability of distributed systems. To understand how sharding would affect Ethereum, we model Ethereum blockchain as a graph and evaluate five methods to partition the graph. We assess methods using three metrics: the balance among shards, the number of transactions that would involve multiple shards, and the amount of data that would be relocated across shards upon repartitioning of the graph.", "title": "" }, { "docid": "4c54ccdc2c6219e185b701c75eb9e5b4", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Perceived development of psychological characteristics in Male and Female elite gymnasts Claire Calmels, Fabienne D’Arripe-Longueville, Magaly Hars, Nadine Debois", "title": "" }, { "docid": "5b3ca1cc607d2e8f0394371f30d9e83a", "text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.", "title": "" }, { "docid": "b01028ef40b1fda74d0621c430ce9141", "text": "ETRI Journal, Volume 29, Number 2, April 2007 A novel low-voltage CMOS current feedback operational amplifier (CFOA) is presented. This realization nearly allows rail-to-rail input/output operations. Also, it provides high driving current capabilities. The CFOA operates at supply voltages of ±0.75 V with a total standby current of 304 μA. 
The circuit exhibits a bandwidth better than 120 MHz and a current drive capability of ±1 mA. An application of the CFOA to realize a new all-pass filter is given. PSpice simulation results using 0.25 μm CMOS technology parameters for the proposed CFOA and its application are given.", "title": "" }, { "docid": "fb4837a619a6b9e49ca2de944ec2314e", "text": "Inverse reinforcement learning addresses the general problem of recovering a reward function from samples of a policy provided by an expert/demonstrator. In this paper, we introduce active learning for inverse reinforcement learning. We propose an algorithm that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at “arbitrary” states. The purpose of our algorithm is to estimate the reward function with similar accuracy as other methods from the literature while reducing the amount of policy samples required from the expert. We also discuss the use of our algorithm in higher dimensional problems, using both Monte Carlo and gradient methods. We present illustrative results of our algorithm in several simulated examples of different complexities.", "title": "" }, { "docid": "fe97095f2af18806e7032176c6ac5d89", "text": "Targeted social engineering attacks in the form of spear phishing emails, are often the main gimmick used by attackers to infiltrate organizational networks and implant state-of-the-art Advanced Persistent Threats (APTs). Spear phishing is a complex targeted attack in which, an attacker harvests information about the victim prior to the attack. This information is then used to create sophisticated, genuine-looking attack vectors, drawing the victim to compromise confidential information. What makes spear phishing different, and more powerful than normal phishing, is this contextual information about the victim. Online social media services can be one such source for gathering vital information about an individual. In this paper, we characterize and examine a true positive dataset of spear phishing, spam, and normal phishing emails from Symantec's enterprise email scanning service. We then present a model to detect spear phishing emails sent to employees of 14 international organizations, by using social features extracted from LinkedIn. Our dataset consists of 4,742 targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack emails sent to 5,912 non victims; and publicly available information from their LinkedIn profiles. We applied various machine learning algorithms to this labeled data, and achieved an overall maximum accuracy of 97.76% in identifying spear phishing emails. We used a combination of social features from LinkedIn profiles, and stylometric features extracted from email subjects, bodies, and attachments. However, we achieved a slightly better accuracy of 98.28% without the social features. Our analysis revealed that social features extracted from LinkedIn do not help in identifying spear phishing emails. To the best of our knowledge, this is one of the first attempts to make use of a combination of stylometric features extracted from emails, and social features extracted from an online social network to detect targeted spear phishing emails.", "title": "" }, { "docid": "8159d3dea8c1a33c3a2c0500e4e00e88", "text": "Sclera blood veins have been investigated recently as a biometric trait which can be used in a recognition system. The sclera is the white and opaque outer protective part of the eye. 
This part of the eye has visible blood veins which are randomly distributed. This feature makes these blood veins a promising factor for eye recognition. The sclera has an advantage in that it can be captured using a visible-wavelength camera. Therefore, applications which may involve the sclera are wide ranging. The contribution of this paper is the design of a robust sclera recognition system with high accuracy. The system comprises of new sclera segmentation and occluded eye detection methods. We also propose an efficient method for vessel enhancement, extraction, and binarization. In the feature extraction and matching process stages, we additionally develop an efficient method, that is, orientation, scale, illumination, and deformation invariant. The obtained results using UBIRIS.v1 and UTIRIS databases show an advantage in terms of segmentation accuracy and computational complexity compared with state-of-the-art methods due to Thomas, Oh, Zhou, and Das.", "title": "" }, { "docid": "4bab29f0689f301683370e73fa045bcc", "text": "Over the past decade, the traditional purchasing and logistics functions have evolved into a broader strategic approach to materials and distribution management known as supply chain management. This research reviews the literature base and development of supply chain management from two separate paths that eventually merged into the modern era of a holistic and strategic approach to operations, materials and logistics management. In addition, this article attempts to clearly describe supply chain management since the literature is replete with buzzwords that address elements or stages of this new management philosophy. This article also discusses various supply chain management strategies and the conditions conducive to supply chain management. ( 2000 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "319bfee25d07faa5b2497102f765ad95", "text": "Mobile computing is where the future is. This theme is not far fetched. We all want anywhere anytime communication. Ubiquitous communication has been made possible in the recent years with the advent of mobile ad hoc networks. The benefits of ubiquitous connectivity not only makes our lives more comfortable but also helps businesses efficiently deploy and manage their resources. These infrastructureless networks that enable \" anywhere anytime \" information access pose several challenging issues. There are several issues in the design and realization of these networks. Mobility planning is tricky and needs to be designed more carefully. Keeping track of mobiles in the infrastructure, the problem more popularly known as the location management problem, is another key issue to be addressed. The load on the servers, for handling location updates and queries, needs to be balanced. Moreover, the operation needs to be robust due to a high probability of temporary or permanent unavailability of one or more of the intermediate nodes. The transport protocols also need to be robust as a high degree of interference and noise can be expected in such environments. Applications will have to designed to incorporate environment-specific features in order to make them more robust. We believe that satisfactory solutions to these problems are essential in order to create smart environments using ad hoc networking infrastructure. 
While medium access in wireless networks still remains an active research area due to the limited availability of wireless bandwidth, the absence of infrastructure makes the problem more challenging. Mobility, being one of the inherent properties of ad hoc networks, results in frequent changes in the network topology, making routing in such dynamic environments complex. In short, the presence of wireless medium, mobility, and lack of infrastructure makes the problem of routing and scheduling far more challenging in ad hoc networks. Providing services in such networks while guaranteeing the performance requirements specified by the users remains an interesting and active research area. This issue of MONET is dedicated to papers relating to this topic. These papers are selected from the papers published in The papers were revised and reviewed again. In this special issue, we have selected seven papers covering various aspects of routing, multicasting, and Quality-of-Service in these networks. The first paper by Tang, Correa and Gerla, \" Effects of Ad Hoc MAC Layer Medium Access Mechanisms under TCP \" , deals with the issues in medium access control …", "title": "" }, { "docid": "14658e1be562a01c1ba8338f5e87020b", "text": "This paper discusses a novel approach in developing a texture sensor emulating the major features of a human finger. The aim of this study is to realize precise and quantitative texture sensing. Three physical properties, roughness, softness, and friction are known to constitute texture perception of humans. The sensor is designed to measure the three specific types of information by adopting the mechanism of human texture perception. First, four features of the human finger that were focused on in designing the novel sensor are introduced. Each feature is considered to play an important role in texture perception; the existence of nails and bone, the multiple layered structure of soft tissue, the distribution of mechanoreceptors, and the deployment of epidermal ridges. Next, detailed design of the texture sensor based on the design concept is explained, followed by evaluating experiments and analysis of the results. Finally, we conducted texture perceptive experiments of actual material using the developed sensor, thus achieving the information expected. Results show the potential of our approach.", "title": "" }, { "docid": "d725c63647485fd77412f16e1f6485f2", "text": "The ongoing discussions about a „digital revolution― and ―disruptive competitive advantages‖ have led to the creation of such a business vision as ―Industry 4.0‖. Yet, the term and even more its actual impact on businesses is still unclear.This paper addresses this gap and explores more specifically, the consequences and potentials of Industry 4.0 for the procurement, supply and distribution management functions. A blend of literature-based deductions and results from a qualitative study are used to explore the phenomenon.The findings indicate that technologies of Industry 4.0 legitimate the next level of maturity in procurement (Procurement &Supply Management 4.0). Empirical findings support these conceptual considerations, revealing the ambitious expectations.The sample comprises seven industries and the employed method is qualitative (telephone and face-to-face interviews). The empirical findings are only a basis for further quantitative investigation , however, they support the necessity and existence of the maturity level. 
The findings also reveal skepticism, owing to the high investment costs, as well as very high expectations. As recent studies about digitalization are rather rare in the context of single company functions, this research work contributes to the understanding of digitalization and supply management.", "title": "" }, { "docid": "fe33ff51ca55bf745bdcdf8ee02e2d36", "text": "A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify \"liveness\" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features (\"quangles\") being designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., that the training of effective cascaded classifiers is feasible in a very short time, less than 1 h for data sets of order 10⁴). Scale invariance is implemented through the use of an image scale pyramid. We propose \"liveness\" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on \"liveness\" verification barriers.", "title": "" }, { "docid": "a87da46ab4026c566e3e42a5695fd8c9", "text": "Micro aerial vehicles (MAVs) are an excellent platform for autonomous exploration. Most MAVs rely mainly on cameras for building a map of the 3D environment. Therefore, vision-based MAVs require an efficient exploration algorithm to select viewpoints that provide informative measurements. In this paper, we propose an exploration approach that selects in real time the next-best-view that maximizes the expected information gain of new measurements. In addition, we take into account the cost of reaching a new viewpoint in terms of distance and predictability of the flight path for a human observer. Finally, our approach selects a path that reduces the risk of crashes when the expected battery life comes to an end, while still maximizing the information gain in the process. We implemented and thoroughly tested our approach and the experiments show that it offers an improved performance compared to other state-of-the-art algorithms in terms of precision of the reconstruction, execution time, and smoothness of the path.", "title": "" }, { "docid": "c8daa2571cd7808664d3dbe775cf60ab", "text": "OBJECTIVE\nTo review the research addressing the relationship of childhood trauma to psychosis and schizophrenia, and to discuss the theoretical and clinical implications.\n\n\nMETHOD\nRelevant studies and previous review papers were identified via computer literature searches.\n\n\nRESULTS\nSymptoms considered indicative of psychosis and schizophrenia, particularly hallucinations, are at least as strongly related to childhood abuse and neglect as many other mental health problems.
Recent large-scale general population studies indicate the relationship is a causal one, with a dose-effect.\n\n\nCONCLUSION\nSeveral psychological and biological mechanisms by which childhood trauma increases risk for psychosis merit attention. Integration of these different levels of analysis may stimulate a more genuinely integrated bio-psycho-social model of psychosis than currently prevails. Clinical implications include the need for staff training in asking about abuse and the need to offer appropriate psychosocial treatments to patients who have been abused or neglected as children. Prevention issues are also identified.", "title": "" }, { "docid": "41f7d66c6e2c593eb7bda22c72a7c048", "text": "Artificial neural networks are algorithms that can be used to perform nonlinear statistical modeling and provide a new alternative to logistic regression, the most commonly used method for developing predictive models for dichotomous outcomes in medicine. Neural networks offer a number of advantages, including requiring less formal statistical training, ability to implicitly detect complex nonlinear relationships between dependent and independent variables, ability to detect all possible interactions between predictor variables, and the availability of multiple training algorithms. Disadvantages include its \"black box\" nature, greater computational burden, proneness to overfitting, and the empirical nature of model development. An overview of the features of neural networks and logistic regression is presented, and the advantages and disadvantages of using this modeling technique are discussed.", "title": "" }, { "docid": "3fdd3c02460972f12bb12b7cf30e2af4", "text": "A small but growing North American trend is the publication of maps of crime on the Internet. A number of web sites allow observers to view the spatial distribution of crime in various American cities, often to a considerable resolution, and increasingly in an interactive format. The use of Geographical Information Systems (GIS) technology to map crime is a rapidly expanding field that is, as this paper will explain, still in a developmental stage, and a number of technical and ethical issues remain to be resolved. The public right to information about local crime has to be balanced by a respect for the privacy of crime victims. Various techniques are being developed to assist crime mappers to aggregate spatial data, both to make their product easier to comprehend and to protect identification of the addresses of crime victims. These data aggregation techniques, while preventing identification of individuals, may also be inadvertently producing maps with the appearance of ‘greater risk’ in low crime areas. When some types of crime mapping have the potential to cause falling house prices, increasing insurance premiums or business abandonment, conflicts may exist between providing a public service and protecting the individual, leaving the cartographer vulnerable to litigation.", "title": "" }, { "docid": "7b215780b323aa3672d34ca243b1cf46", "text": "In this paper, we study the problem of semantic annotation on 3D models that are represented as shape graphs. A functional view is taken to represent localized information on graphs, so that annotations such as part segment or keypoint are nothing but 0-1 indicator vertex functions. Compared with images that are 2D grids, shape graphs are irregular and non-isomorphic data structures. 
To enable the prediction of vertex functions on them by convolutional neural networks, we resort to spectral CNN method that enables weight sharing by parametrizing kernels in the spectral domain spanned by graph Laplacian eigenbases. Under this setting, our network, named SyncSpecCNN, strives to overcome two key challenges: how to share coefficients and conduct multi-scale analysis in different parts of the graph for a single shape, and how to share information across related but different shapes that may be represented by very different graphs. Towards these goals, we introduce a spectral parametrization of dilated convolutional kernels and a spectral transformer network. Experimentally we tested SyncSpecCNN on various tasks, including 3D shape part segmentation and keypoint prediction. State-of-the-art performance has been achieved on all benchmark datasets.", "title": "" }, { "docid": "6a616f2aaa08ecf57236510cda926cad", "text": "While much work has focused on the design of actuators for inputting energy into robotic systems, less work has been dedicated to devices that remove energy in a controlled manner, especially in the field of soft robotics. Such devices have the potential to significantly modulate the dynamics of a system with very low required input power. In this letter, we leverage the concept of layer jamming, previously used for variable stiffness devices, to create a controllable, high force density, soft layer jamming brake (SLJB). We introduce the design, modeling, and performance analysis of the SLJB and demonstrate variable tensile resisting forces through the regulation of vacuum pressure. Further, we measure and model the tensile force with respect to different layer materials, vacuum pressures, and lengthening velocities, and show its ability to absorb energy during collisions. We hope to apply the SLJB in a number of applications in wearable technology.", "title": "" }, { "docid": "cc4458a843a2a6ffa86b4efd1956ffca", "text": "There is a growing interest in the use of chronic deep brain stimulation (DBS) for the treatment of medically refractory movement disorders and other neurological and psychiatric conditions. Fundamental questions remain about the physiologic effects and safety of DBS. Previous basic research studies have focused on the direct polarization of neuronal membranes by electrical stimulation. The goal of this paper is to provide information on the thermal effects of DBS using finite element models to investigate the magnitude and spatial distribution of DBS induced temperature changes. The parameters investigated include: stimulation waveform, lead selection, brain tissue electrical and thermal conductivity, blood perfusion, metabolic heat generation during the stimulation. Our results show that clinical deep brain stimulation protocols will increase the temperature of surrounding tissue by up to 0.8degC depending on stimulation/tissue parameters", "title": "" } ]
scidocsrr
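The spectral CNN passage in the list above (SyncSpecCNN) rests on one basic operation: filtering a vertex function in the eigenbasis of the graph Laplacian. The sketch below illustrates that operation on a toy four-vertex graph; the adjacency matrix, the vertex function and the hand-picked low-pass gains are invented for the example and are not taken from the cited work.

import numpy as np

# Adjacency matrix of a toy 4-vertex shape graph (assumed example values).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial graph Laplacian

# The Laplacian eigenbasis spans the spectral domain of the graph.
eigvals, U = np.linalg.eigh(L)

# A vertex function, e.g. a 0-1 part-membership indicator as in the passage.
f = np.array([1.0, 1.0, 0.0, 0.0])

# A spectral kernel is a vector of per-frequency gains; a low-pass choice here.
g = np.exp(-0.5 * eigvals)

# Filter = project to the spectral domain, scale, project back.
f_filtered = U @ (g * (U.T @ f))
print(np.round(f_filtered, 3))

A trained spectral CNN would replace the fixed gains g with learnable parameters shared across graphs, which is where the synchronization and multi-scale issues discussed in the passage come in.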
3df95b78fb166e423159d9f34f006d7f
A Knowledge-Grounded Multimodal Search-Based Conversational Agent
[ { "docid": "54d3d5707e50b979688f7f030770611d", "text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.", "title": "" }, { "docid": "d4a1acf0fedca674145599b4aa546de0", "text": "Neural network models are capable of generating extremely natural sounding conversational interactions. However, these models have been mostly applied to casual scenarios (e.g., as “chatbots”) and have yet to demonstrate they can serve in more useful conversational applications. This paper presents a novel, fully data-driven, and knowledge-grounded neural conversation model aimed at producing more contentful responses. We generalize the widely-used Sequence-to-Sequence (SEQ2SEQ) approach by conditioning responses on both conversation history and external “facts”, allowing the model to be versatile and applicable in an open-domain setting. Our approach yields significant improvements over a competitive SEQ2SEQ baseline. Human judges found that our outputs are significantly more informative.", "title": "" }, { "docid": "ec78ecc16b9540e8b7dbe216770f726d", "text": "Multimodal machine translation is one of the applications that integrates computer vision and language processing. It is a unique task given that, in the field of machine translation, many state-of-the-art algorithms still only employ textual information. In this work, we explore the effectiveness of reinforcement learning in multimodal machine translation. We present a novel algorithm based on the Advantage Actor-Critic (A2C) algorithm that specifically caters to the multimodal machine translation task of the EMNLP 2018 Third Conference on Machine Translation (WMT18). We evaluate our proposed algorithm on the Multi30K multilingual English-German image description dataset and the Flickr30K image entity dataset. Our model takes two channels of inputs, image and text, uses translation evaluation metrics as training rewards, and achieves better results than supervised learning MLE baseline models. Furthermore, we discuss the prospects and limitations of using reinforcement learning for machine translation. Our experiment results suggest a promising reinforcement learning solution to the general task of multimodal sequence to sequence learning.", "title": "" }, { "docid": "fe9724a94d1aa13e4fbefa7c88ac09dd", "text": "We demonstrate a multimodal dialogue system using reinforcement learning for in-car scenarios, developed at Edinburgh University and Cambridge University for the TALK project. This prototype is the first “Information State Update” (ISU) dialogue system to exhibit reinforcement learning of dialogue strategies, and also has a fragmentary clarification feature. This paper describes the main components and functionality of the system, as well as the purposes and future use of the system, and surveys the research issues involved in its construction. Evaluation of this system (i.e.
comparing the baseline system with handcoded vs. learnt dialogue policies) is ongoing, and the demonstration will show both.", "title": "" } ]
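The knowledge-grounded conversation passage above conditions the response on both the conversation history and a set of external facts. The sketch below shows only the fact-attention step of such a model; the dimensions and random encodings are placeholders rather than the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # hidden size (assumed for the example)
history = rng.normal(size=d)           # encoding of the conversation so far
facts = rng.normal(size=(5, d))        # encodings of five retrieved facts

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Dot-product attention: how relevant is each fact to the current history?
weights = softmax(facts @ history)

# Fact summary that would be fed to the decoder together with the history.
fact_summary = weights @ facts
decoder_input = np.concatenate([history, fact_summary])
print(weights.round(3), decoder_input.shape)

In a full system the history and fact encodings would come from trained encoders, and the concatenated vector would condition a sequence decoder that generates the response.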
[ { "docid": "2a451c58ee4d7959857a3a7a0397300d", "text": "The Software Defined Networking (SDN) paradigm introduces separation of data and control planes for flow-switched networks and enables different approaches to network security than those existing in present IP networks. The centralized control plane, i.e. the SDN controller, can host new security services that profit from the global view of the network and from direct control of switches. Some security services can be deployed as external applications that communicate with the controller. Due to the fact that all unknown traffic must be transmitted for investigation to the controller, maliciously crafted traffic can lead to Denial Of Service (DoS) attack on it. In this paper we analyse features of SDN in the context of security application. Additionally we point out some aspects of SDN networks that, if changed, could improve SDN network security capabilities. Moreover, the last section of the paper presents a detailed description of security application that detects a broad kind of malicious activity using key features of SDN architecture.", "title": "" }, { "docid": "14b9aaa9ff0be3ed0a8d420fb63f54dd", "text": "Stream reasoning studies the application of inference techniques to data characterised by being highly dynamic. It can find application in several settings, from Smart Cities to Industry 4.0, from Internet of Things to Social Media analytics. This year stream reasoning turns ten, and in this article we analyse its growth. In the first part, we trace the main results obtained so far, by presenting the most prominent studies. We start by an overview of the most relevant studies developed in the context of semantic web, and then we extend the analysis to include contributions from adjacent areas, such as database and artificial intelligence. Looking at the past is useful to prepare for the future: in the second part, we present a set of open challenges and issues that stream reasoning will face in the next future.", "title": "" }, { "docid": "7c171e744df03df658c02e899e197bd4", "text": "In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low-spontaneous rate fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater effect of a preceding masker behaviorally. Further, the amount of wave-V latency change with masker-to-probe interval was positively correlated with the rate of change in forward masking detection thresholds. 
Although we cannot rule out central contributions, these findings are consistent with the hypothesis that auditory nerve fiber deafferentation occurs in humans and may predict how well individuals can hear in noisy environments.", "title": "" }, { "docid": "945cf1645df24629842c5e341c3822e7", "text": "Cloud computing economically enables the paradigm of data service outsourcing. However, to protect data privacy, sensitive cloud data have to be encrypted before outsourced to the commercial public cloud, which makes effective data utilization service a very challenging task. Although traditional searchable encryption techniques allow users to securely search over encrypted data through keywords, they support only Boolean search and are not yet sufficient to meet the effective data utilization need that is inherently demanded by large number of users and huge amount of data files in cloud. In this paper, we define and solve the problem of secure ranked keyword search over encrypted cloud data. Ranked search greatly enhances system usability by enabling search result relevance ranking instead of sending undifferentiated results, and further ensures the file retrieval accuracy. Specifically, we explore the statistical measure approach, i.e., relevance score, from information retrieval to build a secure searchable index, and develop a one-to-many order-preserving mapping technique to properly protect those sensitive score information. The resulting design is able to facilitate efficient server-side ranking without losing keyword privacy. Thorough analysis shows that our proposed solution enjoys “as-strong-as-possible” security guarantee compared to previous searchable encryption schemes, while correctly realizing the goal of ranked keyword search. Extensive experimental results demonstrate the efficiency of the proposed solution.", "title": "" }, { "docid": "3ab85b8f58e60f4e59d6be49648ce290", "text": "It is basically a solved problem for a server to authenticate itself to a client using standard methods of Public Key cryptography. The Public Key Infrastructure (PKI) supports the SSL protocol which in turn enables this functionality. The single-point-of-failure in PKI, and hence the focus of attacks, is the Certi cation Authority. However this entity is commonly o -line, well defended, and not easily got at. For a client to authenticate itself to the server is much more problematical. The simplest and most common mechanism is Username/Password. Although not at all satisfactory, the only onus on the client is to generate and remember a password and the reality is that we cannot expect a client to be su ciently sophisticated or well organised to protect larger secrets. However Username/Password as a mechanism is breaking down. So-called zero-day attacks on servers commonly recover les containing information related to passwords, and unless the passwords are of su ciently high entropy they will be found. The commonly applied patch is to insist that clients adopt long, complex, hard-to-remember passwords. This is essentially a second line of defence imposed on the client to protect them in the (increasingly likely) event that the authentication server will be successfully hacked. Note that in an ideal world a client should be able to use a low entropy password, as a server can limit the number of attempts the client can make to authenticate itself. The often proposed alternative is the adoption of multifactor authentication. 
In the simplest case the client must demonstrate possession of both a token and a password. The banks have been to the forefront of adopting such methods, but the token is invariably a physical device of some kind. Cryptography's embarrassing secret is that to date no completely satisfactory means has been discovered to implement two-factor authentication entirely in software. In this paper we propose such a scheme.", "title": "" }, { "docid": "e943bc89e2b8318ce30002a68ee84124", "text": "Evaluation has become a fundamental part of visualization research and researchers have employed many approaches from the field of human-computer interaction like measures of task performance, thinking aloud protocols, and analysis of interaction logs. Recently, eye tracking has also become popular to analyze visual strategies of users in this context. This has added another modality and more data, which requires special visualization techniques to analyze this data. However, only few approaches exist that aim at an integrated analysis of multiple concurrent evaluation procedures. The variety, complexity, and sheer amount of such coupled multi-source data streams require a visual analytics approach. Our approach provides a highly interactive visualization environment to display and analyze thinking aloud, interaction, and eye movement data in close relation. Automatic pattern finding algorithms allow an efficient exploratory search and support the reasoning process to derive common eye-interaction-thinking patterns between participants. In addition, our tool equips researchers with mechanisms for searching and verifying expected usage patterns. We apply our approach to a user study involving a visual analytics application and we discuss insights gained from this joint analysis. We anticipate our approach to be applicable to other combinations of evaluation techniques and a broad class of visualization applications.", "title": "" }, { "docid": "7f605604647564e67c5d910003a9707a", "text": "Given a query consisting of a mention (name string) and a background document, entity disambiguation calls for linking the mention to an entity from reference knowledge base like Wikipedia. Existing studies typically use hand-crafted features to represent mention, context and entity, which is laborintensive and weak to discover explanatory factors of data. In this paper, we address this problem by presenting a new neural network approach. The model takes consideration of the semantic representations of mention, context and entity, encodes them in continuous vector space and effectively leverages them for entity disambiguation. Specifically, we model variable-sized contexts with convolutional neural network, and embed the positions of context words to factor in the distance between context word and mention. Furthermore, we employ neural tensor network to model the semantic interactions between context and mention. We conduct experiments for entity disambiguation on two benchmark datasets from TAC-KBP 2009 and 2010. Experimental results show that our method yields state-of-the-art performances on both datasets.", "title": "" }, { "docid": "b447aec2deaa67788560efe1d136be31", "text": "This paper addresses the design, construction and control issues of a novel biomimetic robotic dolphin equipped with mechanical flippers, based on an engineered propulsive model. The robotic dolphin is modeled as a three-segment organism composed of a rigid anterior body, a flexible rear body and an oscillating fluke. 
The dorsoventral movement of the tail produces the thrust, and bending of the anterior body in the horizontal plane enables turning maneuvers. A dual-microcontroller structure is adopted to drive the oscillating multi-link rear body and the mechanical flippers. Experimental results primarily confirm the effectiveness of the dolphin-like movement in propulsion and maneuvering.", "title": "" }, { "docid": "1394eaac58304e5d6f951ca193e0be40", "text": "We introduce low-cost hardware for performing non-invasive side-channel attacks on Radio Frequency Identification Devices (RFID) and develop techniques for facilitating a correlation power analysis (CPA) in the presence of the field of an RFID reader. We practically verify the effectiveness of the developed methods by analysing the security of commercial contactless smartcards employing strong cryptography, pinpointing weaknesses in the protocol and revealing a vulnerability towards side-channel attacks. Employing the developed hardware, we present the first successful key-recovery attack on commercially available contactless smartcards based on the Data Encryption Standard (DES) or TripleDES (3DES) cipher that are widely used for security-sensitive applications, e.g., payment purposes.", "title": "" }, { "docid": "bd1a13c94d0e12b4ba9f14fef47d2564", "text": "Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f = u + η, and η is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle’s projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation. Source code: ANSI C source code to produce the same results as the demo is accessible at the IPOL web page of this article.", "title": "" }, { "docid": "092239f41a6e216411174e5ed9dceee2", "text": "In this paper, we propose a simple but effective specular highlight removal method using a single input image. Our method is based on a key observation: the maximum fraction of the diffuse color component (the so-called maximum diffuse chromaticity in the literature) in local patches in color images changes smoothly. Using this property, we can estimate the maximum diffuse chromaticity values of the specular pixels by directly applying a low-pass filter to the maximum fraction of the color components of the original image, such that the maximum diffuse chromaticity values can be propagated from the diffuse pixels to the specular pixels. The diffuse color at each pixel can then be computed as a nonlinear function of the estimated maximum diffuse chromaticity. Our method can be directly extended to multi-color surfaces if edge-preserving filters (e.g., the bilateral filter) are used such that the smoothing can be guided by the maximum diffuse chromaticity. The maximum diffuse chromaticity, however, is itself still to be estimated.
We thus present an approximation and demonstrate its effectiveness. Recent developments in fast bilateral filtering techniques enable our method to run over 200× faster than the state-of-the-art on a standard CPU and differentiate our method from previous work.", "title": "" }, { "docid": "9b1769eb8e1991c5e1bb6b58c806d249", "text": "Online reviews play a crucial role in today's electronic commerce. Due to pervasive spam reviews, customers can be misled to buy low-quality products, while decent stores can be defamed by malicious reviews. We observe that, in reality, a great portion (> 90% in the data we study) of the reviewers write only one review (singleton review). These reviews are so enormous in number that they can almost determine a store's rating and impression. However, existing methods ignore these reviewers. To address this problem, we observe that the normal reviewers' arrival pattern is stable and uncorrelated to their rating pattern temporally. In contrast, spam attacks are usually bursty and either positively or negatively correlated to the rating. Thus, we propose to detect such attacks via unusually correlated temporal patterns. We identify and construct multidimensional time series based on aggregate statistics, in order to depict and mine such correlation. Experimental results show that the proposed method is effective in detecting singleton review attacks. We discover that singleton reviews are a significant source of spam reviews and largely affect the ratings of online stores.", "title": "" }, { "docid": "a8695230b065ae2e4c5308dfe4f8c10e", "text": "The paper describes a solution for the Yandex Personalized Web Search Challenge. The goal of the challenge is to rerank the top ten web search query results to bring the most personally relevant results to the top, thereby improving the search quality. The paper focuses on feature engineering for learning to rank in web search, including a novel pair-wise feature and short- and long-term personal navigation features. The paper demonstrates that point-wise logistic regression can achieve state-of-the-art performance in terms of normalized discounted cumulative gain with the capability to scale up.", "title": "" }, { "docid": "30bc96451dd979a8c08810415e4a2478", "text": "An adaptive circulator fabricated on a 130 nm CMOS process is presented. The circulator has two adaptive blocks for gain and phase mismatch correction and leakage cancelation. The impedance matching circuit corrects mismatches for the antenna, divider, and LNTA. The cancelation block cancels the Tx leakage. Measured isolation between transmitter and receiver for a single tone at 2.4 GHz is 90 dB, and for a 40 MHz wide-band signal is 50 dB. The circulator Rx gain is 10 dB, with NF = 4.7 dB and 5 dB insertion loss.", "title": "" }, { "docid": "0254d49cb759e163a032b6557f969bd3", "text": "The smart electricity grid enables a two-way flow of power and data between suppliers and consumers in order to facilitate the power flow optimization in terms of economic efficiency, reliability and sustainability. This infrastructure permits the consumers and the micro-energy producers to take a more active role in the electricity market and the dynamic energy management (DEM). The most important challenge in a smart grid (SG) is how to take advantage of the users’ participation in order to reduce the cost of power. However, effective DEM depends critically on load and renewable production forecasting.
This calls for intelligent methods and solutions for the real-time exploitation of the large volumes of data generated by a vast number of smart meters. Hence, robust data analytics, high performance computing, efficient data network management, and cloud computing techniques are critical towards the optimized operation of SGs. This research aims to highlight the big data issues and challenges faced by the DEM employed in SG networks. It also provides a brief description of the most commonly used data processing methods in the literature, and proposes a promising direction for future research in the field.", "title": "" }, { "docid": "9a2914fcec073e83674fcf7eb3837602", "text": "In this technical note, a new approach for the stability analysis and controller synthesis of networked control systems (NCSs) with uncertain, time-varying network delays is presented. Based on the Jordan form of the continuous-time plant, a discrete-time representation of the NCS is derived. Using this model for delays that can be both smaller and larger than the sampling interval, sufficient LMI conditions for stability and feedback stabilization are proposed. The results are illustrated by a typical motion control example.", "title": "" }, { "docid": "bca053718bbcc09d6831b2ed36d717e4", "text": "Plagiarism has become an area of interest for researchers due to its importance and its fast-growing rate. In this paper we survey and list the advantages and disadvantages of the latest and most effective methods used or developed in automatic plagiarism detection, according to their results. We mainly cover methods used in natural language text detection, index structures, and external plagiarism detection and clustering-based detection.", "title": "" }, { "docid": "90f90bee3fa1f66b7eb9c7da0f5a6d8e", "text": "Stack Overflow is a popular questions and answers (Q&A) website among software developers. It counts more than two million users who actively contribute by asking and answering thousands of questions daily. Identifying and reviewing low quality posts preserves the quality of the site's contents and is crucial to maintain a good user experience. In Stack Overflow the identification of poor quality posts is performed by selected users manually. The system also uses an automated identification system based on textual features. Low quality posts automatically enter a review queue maintained by experienced users. We present an approach to improve the automated system in use at Stack Overflow. It analyzes both the content of a post (e.g., simple textual features and complex readability metrics) and community-related aspects (e.g., popularity of a user in the community). Our approach reduces the size of the review queue effectively and removes misclassified good quality posts.", "title": "" }, { "docid": "c3558d8f79cd8a7f53d8b6073c9a7db3", "text": "De novo assembly of RNA-seq data enables researchers to study transcriptomes without the need for a genome sequence; this approach can be usefully applied, for instance, in research on 'non-model organisms' of ecological and evolutionary importance, cancer samples or the microbiome. In this protocol we describe the use of the Trinity platform for de novo transcriptome assembly from RNA-seq data in non-model organisms.
We also present Trinity-supported companion utilities for downstream applications, including RSEM for transcript abundance estimation, R/Bioconductor packages for identifying differentially expressed transcripts across samples and approaches to identify protein-coding genes. In the procedure, we provide a workflow for genome-independent transcriptome analysis leveraging the Trinity platform. The software, documentation and demonstrations are freely available from http://trinityrnaseq.sourceforge.net. The run time of this protocol is highly dependent on the size and complexity of data to be analyzed. The example data set analyzed in the procedure detailed herein can be processed in less than 5 h.", "title": "" }, { "docid": "7bb1d856e5703afb571cf781d48ce403", "text": "RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction.", "title": "" } ]
scidocsrr
4bd5e40fbf2367af198a7911aacf51d6
Video Question Answering via Attribute-Augmented Attention Network Learning
[ { "docid": "6a1e614288a7977b72c8037d9d7725fb", "text": "We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.", "title": "" }, { "docid": "0060fbebb60c7f67d8750826262d7135", "text": "This paper introduces a web image search reranking approach that explores multiple modalities in a graph-based learning scheme. Different from the conventional methods that usually adopt a single modality or integrate multiple modalities into a long feature vector, our approach can effectively integrate the learning of relevance scores, weights of modalities, and the distance metric and its scaling for each modality into a unified scheme. In this way, the effects of different modalities can be adaptively modulated and better reranking performance can be achieved. We conduct experiments on a large dataset that contains more than 1000 queries and 1 million images to evaluate our approach. Experimental results demonstrate that the proposed reranking approach is more robust than using each individual modality, and it also performs better than many existing methods.", "title": "" } ]
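The graph-based reranking passage above learns relevance scores, modality weights and per-modality distance metrics jointly. The sketch below keeps only the simplest ingredient, combining per-modality similarities with fixed weights to reorder a result list; the weights, the toy distances and the exponential kernel are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(1)
n_items, n_modalities = 6, 3
# Distance of each retrieved item to the query, one column per modality
# (e.g. color, texture, surrounding text), using toy values.
dist = rng.random((n_items, n_modalities))

# Modality weights; the cited approach learns these, here they are fixed.
w = np.array([0.5, 0.3, 0.2])

# Turn distances into similarities and fuse them across modalities.
sim = np.exp(-dist)
fused = sim @ w

reranked = np.argsort(-fused)   # best item first
print(fused.round(3), reranked)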
[ { "docid": "c3f25271d25590bf76b36fee4043d227", "text": "Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, this does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme to be effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.", "title": "" }, { "docid": "afe1711ee0fbd412f0b425c488f46fbc", "text": "The Iterated Prisoner’s Dilemma has guided research on social dilemmas for decades. However, it distinguishes between only two atomic actions: cooperate and defect. In real world prisoner’s dilemmas, these choices are temporally extended and different strategies may correspond to sequences of actions, reflecting grades of cooperation. We introduce a Sequential Prisoner’s Dilemma (SPD) game to better capture the aforementioned characteristics. In this work, we propose a deep multiagent reinforcement learning approach that investigates the evolution of mutual cooperation in SPD games. Our approach consists of two phases. The first phase is offline: it synthesizes policies with different cooperation degrees and then trains a cooperation degree detection network. The second phase is online: an agent adaptively selects its policy based on the detected degree of opponent cooperation. The effectiveness of our approach is demonstrated in two representative SPD 2D games: the Apple-Pear game and the Fruit Gathering game. Experimental results show that our strategy can avoid being exploited by exploitative opponents and achieve cooperation with cooperative opponents.", "title": "" }, { "docid": "011d0fa5eac3128d5127a66741689df7", "text": "Tweets often contain a large proportion of abbreviations, alternative spellings, novel words and other non-canonical language. These features are problematic for standard language analysis tools and it can be desirable to convert them to canonical form. We propose a novel text normalization model based on learning edit operations from labeled data while incorporating features induced from unlabeled data via character-level neural text embeddings. The text embeddings are generated using an Simple Recurrent Network. We find that enriching the feature set with text embeddings substantially lowers word error rates on an English tweet normalization dataset. 
Our model improves on stateof-the-art with little training data and without any lexical resources.", "title": "" }, { "docid": "1d2b45d990059df15c4fb3c76c67c39d", "text": "Wireless networks with their ubiquitous applications have become an indispensable part of our daily lives. Wireless networks demand more and more spectral resources to support the ever increasing numbers of users. According to network engineers, the current spectrum crunch can be addressed with the introduction of cognitive radio networks (CRNs). In half-duplex (HD) CRNs, the secondary users (SUs) can either only sense the spectrum or transmit at a given time. This HD operation limits the SU throughput, because the SUs cannot transmit during the spectrum sensing. However, with the advances in self-interference suppression (SIS), full-duplex (FD) CRNs allow for simultaneous spectrum sensing and transmission on a given channel. This FD operation increases the throughput and reduces collisions as compared with HD-CRNs. In this paper, we present a comprehensive survey of FD-CRN communications. We cover the supporting network architectures and the various transmit and receive antenna designs. We classify the different SIS approaches in FD-CRNs. We survey the spectrum sensing approaches and security requirements for FD-CRNs. We also survey major advances in FD medium access control protocols as well as open issues, challenges, and future research directions to support the FD operation in CRNs.", "title": "" }, { "docid": "062fb8603fe65ddde2be90bac0519f97", "text": "Meta-heuristic methods represent very powerful tools for dealing with hard combinatorial optimization problems. However, real life instances usually cannot be treated efficiently in \"reasonable\" computing times. Moreover, a major issue in metaheuristic design and calibration is to make them robust, i.e., to provide high performance solutions for a variety of problem settings. Parallel meta-heuristics aim to address both issues. The objective of this chapter is to present a state-of-the-art survey of the main parallel meta-heuristic ideas and strategies, and to discuss general design principles applicable to all meta-heuristic classes. To achieve this goal, we explain various paradigms related to parallel meta-heuristic development, where communications, synchronization and control aspects are the most relevant. We also discuss implementation issues, namely the influence of the target architecture on parallel execution of meta-heuristics, pointing out the characteristics of shared and distributed memory multiprocessor systems. All these topics are illustrated by examples from recent literature. These examples are related to the parallelization of various meta-heuristic methods, but we focus here on Variable Neighborhood Search and Bee Colony Optimization.", "title": "" }, { "docid": "eb6636299df817817aa49f1f8dad04f5", "text": "This paper introduces a new generative deep learning network for human motion synthesis and control. Our key idea is to combine recurrent neural networks (RNNs) and adversarial training for human motion modeling. We first describe an efficient method for training a RNNs model from prerecorded motion data. We implement recurrent neural networks with long short-term memory (LSTM) cells because they are capable of handling nonlinear dynamics and long term temporal dependencies present in human motions. 
Next, we train a refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs), such that the refined motion sequences are indistinguishable from real motion capture data using a discriminative network. We embed contact information into the generative deep learning model to further improve the performance of our generative model. The resulting model is appealing to motion synthesis and control because it is compact, contact-aware, and can generate an infinite number of naturally looking motions with infinite lengths. Our experiments show that motions generated by our deep learning model are always highly realistic and comparable to high-quality motion capture data. We demonstrate the power and effectiveness of our models by exploring a variety of applications, ranging from random motion synthesis, online/offline motion control, and motion filtering. We show the superiority of our generative model by comparison against baseline models.", "title": "" }, { "docid": "87c1d39dd39375f40306416077f3cb22", "text": "For any AND-OR formula of size N, there exists a bounded-error N1/2+o(1)-time quantum algorithm, based on a discrete-time quantum walk, that evaluates this formula on a black-box input. Balanced, or \"approximately balanced,\" formulas can be evaluated in O(radicN) queries, which is optimal. It follows that the (2-o(1))th power of the quantum query complexity is a lower bound on the formula size, almost solving in the positive an open problem posed by Laplante, Lee and Szegedy.", "title": "" }, { "docid": "cdf78bab8d93eda7ccbb41674d24b1a2", "text": "OBJECTIVE\nThe U.S. Food and Drug Administration and Institute of Medicine are currently investigating front-of-package (FOP) food labelling systems to provide science-based guidance to the food industry. The present paper reviews the literature on FOP labelling and supermarket shelf-labelling systems published or under review by February 2011 to inform current investigations and identify areas of future research.\n\n\nDESIGN\nA structured search was undertaken of research studies on consumer use, understanding of, preference for, perception of and behaviours relating to FOP/shelf labelling published between January 2004 and February 2011.\n\n\nRESULTS\nTwenty-eight studies from a structured search met inclusion criteria. Reviewed studies examined consumer preferences, understanding and use of different labelling systems as well as label impact on purchasing patterns and industry product reformulation.\n\n\nCONCLUSIONS\nThe findings indicate that the Multiple Traffic Light system has most consistently helped consumers identify healthier products; however, additional research on different labelling systems' abilities to influence consumer behaviour is needed.", "title": "" }, { "docid": "926db14af35f9682c28a64e855fb76e5", "text": "This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. 
Experimental results of the 10-fold cross validation test show the effectiveness of the proposed CRF based NER system with an overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively.", "title": "" }, { "docid": "5c32ca62b8ffcc8dd59f424e02a542cd", "text": "We develop a systematic approach for analyzing client-server applications that aim to hide sensitive user data from untrusted servers. We then apply it to Mylar, a framework that uses multi-key searchable encryption (MKSE) to build Web applications on top of encrypted data.\n We demonstrate that (1) the Popa-Zeldovich model for MKSE does not imply security against either passive or active attacks; (2) Mylar-based Web applications reveal users' data and queries to passive and active adversarial servers; and (3) Mylar is generically insecure against active attacks due to system design flaws. Our results show that the problem of securing client-server applications against actively malicious servers is challenging and still unsolved.\n We conclude with general lessons for the designers of systems that rely on property-preserving or searchable encryption to protect data from untrusted servers.", "title": "" }, { "docid": "dcc9490a771e5b2758181424b0407306", "text": "An ultra-low power wake-up receiver for 2.4-GHz wireless sensor networks, based on a fast sampling method, is presented. A novel multi-branch receiver architecture covers a wide range of interferer scenarios for highly occupied radio channels. The scalability of current consumption versus data rate at a constant sensitivity is another useful feature that fits a multitude of applications, requiring both short reaction times and ultra-low power consumption. The 2.4-GHz OOK receiver comprises a 3-branch analog superheterodyne front-end and six digital 31-bit correlating decoders. It is fabricated in a 130-nm CMOS technology. The current consumption is 2.9 μA at 2.5 V supply voltage and a reaction time of 30 ms. The receiver sensitivity is -80 dBm. Among other sub-100 μW state-of-the-art receivers, the presented implementation shows the best reported sensitivity.", "title": "" }, { "docid": "14e4e6eee832f85ec8b2e2ee5e60de1c", "text": "In this paper, we propose the utterance-level permutation invariant training uPIT technique. uPIT is a practically applicable, end-to-end, deep-learning-based solution for speaker independent multitalker speech separation. Specifically, uPIT extends the recently proposed permutation invariant training PIT technique with an utterance-level cost function, hence eliminating the need for solving an additional permutation problem during inference, which is otherwise required by frame-level PIT. We achieve this using recurrent neural networks RNNs that, during training, minimize the utterance-level separation error, hence forcing separated frames belonging to the same speaker to be aligned to the same output stream. In practice, this allows RNNs, trained with uPIT, to separate multitalker mixed speech without any prior knowledge of signal duration, number of speakers, speaker identity, or gender. We evaluated uPIT on the WSJ0 and Danish two- and three-talker mixed-speech separation tasks and found that uPIT outperforms techniques based on nonnegative matrix factorization and computational auditory scene analysis, and compares favorably with deep clustering, and the deep attractor network. Furthermore, we found that models trained with uPIT generalize well to unseen speakers and languages. 
Finally, we found that a single model, trained with uPIT, can handle both two-speaker and three-speaker speech mixtures.", "title": "" }, { "docid": "301bc00e99607569dcba6317ebb2f10d", "text": "Bandwidth and gain enhancement of microstrip patch antennas (MPAs) is proposed using a reflective metasurface (RMS) as a superstrate. Two different types of the RMS, namely the double split-ring resonator (DSR) and the double closed-ring resonator (DCR), are separately investigated. The two antenna prototypes were manufactured, measured and compared. The experimental results confirm that the RMS-loaded MPAs achieve high gain as well as bandwidth improvement. The designed antenna using the RMS as a superstrate has a high gain of over 9.0 dBi and a wide impedance bandwidth of over 13%. The RMS is also utilized to achieve a thin antenna with a cavity height of 6 mm, which is equivalent to λ/21 at the center frequency of 2.45 GHz. At the same time, the cross polarization level and front-to-back ratio of these antennas are also examined. key words: wideband, high-gain, metamaterial, Fabry-Perot cavity (FPC), frequency selective surface (FSS)", "title": "" }, { "docid": "ecb93affc7c9b0e4bf86949d3f2006d4", "text": "We present data-dependent learning bounds for the general scenario of non-stationary non-mixing stochastic processes. Our learning guarantees are expressed in terms of a data-dependent measure of sequential complexity and a discrepancy measure that can be estimated from data under some mild assumptions. We also provide a novel analysis of a stable time series forecasting algorithm using this new notion of discrepancy that we introduce. We use our learning bounds to devise new algorithms for non-stationary time series forecasting for which we report some preliminary experimental results. An extended abstract has appeared in (Kuznetsov and Mohri, 2015).", "title": "" }, { "docid": "0acf9ef6e025805a76279d1c6c6c55e7", "text": "Android mobile devices are enjoying the lion's share of the market in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses a significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \"more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. We reveal how the malicious firmware and pre-installed malware are injected, and discover that 1,947 (8.1%) pre-installed applications have a signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain a malicious hosts file, at most 40 (16.0%) firmwares have the native-level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java-level privilege escalation vulnerability.
Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.", "title": "" }, { "docid": "d38f389809b9ed973e3b92216496909c", "text": "Bullwhip effect in the supply chain distribution network is a phenomenon that is highly avoided because it can lead to high operational costs. It drew the attention of researchers to examine ways to minimize the bullwhip effect. Bullwhip effect occurs because of incorrect company planning in pursuit of customer demand. Bullwhip effect occurs due to increased amplitude of demand variance towards upper supply chain level. If the product handled is a perishable product it will make the bullwhip effect more sensitive. The purpose of this systematic literature review is to map out some of the variables used in constructing mathematical models to minimize the bullwhip effect on food supply chains that have perishable product characteristics. The result of this systematic literature review is that the authors propose an appropriate optimization model that will be applied in the food supply chain sales on the train in Indonesian railways in the next research.", "title": "" }, { "docid": "088e317a01fba8ac42a72a5be9144daa", "text": "The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.", "title": "" }, { "docid": "5f4e761af11ace5a4d6819431893a605", "text": "The high power density converter is required due to the strict demands of volume and weight in more electric aircraft, which makes SiC extremely attractive for this application. In this work, a prototype of 50 kW SiC high power density converter with the topology of two-level three-phase voltage source inverter is demonstrated. This converter is driven at high switching speed based on the optimization in switching characterization. It operates at a switching frequency up to 100 kHz and a low dead time of 250 ns. 
And the converter efficiency is measured to be 99% at 40 kHz and 97.8% at 100 kHz.", "title": "" }, { "docid": "7edddf437e1759b8b13821670f52f4ba", "text": "This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the estimation of the trajectory comparing the displacement estimated with the internal odometry of the motors and the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference on less than 30 mm between the position estimated with the SLAM and odometry, and a difference in the angular orientation of the mobile robot lower than 5° in absolute displacements up to 1000 mm.", "title": "" }, { "docid": "ef779863c1ca2e8eab8198b5a8ebb503", "text": "Due to the ongoing debate regarding the definitions and measurement of cyberbullying, the present article critically appraises the existing literature and offers direction regarding the question of how best to conceptualise peer-to-peer abuse in a cyber context. Variations across definitions are problematic as it has been argued that inconsistencies with regard to definitions result in researchers examining different phenomena, whilst the absence of an agreed conceptualisation of the behaviour(s) involved hinders the development of reliable and valid measures. Existing definitions of cyberbullying often incorporate the criteria of traditional bullying such as intent to harm, repetition, and imbalance of power. However, due to the unique nature of cyber-based communication, it can be difficult to identify such criteria in relation to cyber-based abuse. Thus, for these reasons cyberbullying may not be the most appropriate term. Rather than attempting to “shoe-horn” this abusive behaviour into the preconceived conceptual framework that provides an understanding of traditional bullying, it is timely to take an alternative approach. We argue that it is now time to turn our attention to the broader issue of cyber aggression, rather than persist with the narrow focus that is cyberbullying.", "title": "" } ]
scidocsrr
a59192878d1e9e2748ae9c92aea5235c
A computational framework for detecting offensive language with support vector machine in social communities
[ { "docid": "cc8ce41d7ae2bb0d92fa51cb26769aa1", "text": "185 All Rights Reserved © 2012 IJARCET Abstract-With increasing amounts of data being generated by businesses and researchers there is a need for fast, accurate and robust algorithms for data analysis. Improvements in databases technology, computing performance and artificial intelligence have contributed to the development of intelligent data analysis. Support vector machines are a specific type of machine learning algorithm that are among the most widelyused for many statistical learning problems, such as spam filtering, text classification, handwriting analysis, face and object recognition, and countless others. Support vector machines have also come into widespread use in practically every area of bioinformatics within the last ten years, and their area of influence continues to expand today. The support vector machine has been developed as robust tool for classification and regression in noisy, complex domains. The two key features of support vector machines are generalization theory, which leads to a principled way to choose an hypothesis; and, kernel functions, which introduce nonlinearity in the hypothesis space without explicitly requiring a non-linear algorithm.", "title": "" }, { "docid": "df75c48628144cdbcf974502ea24aa24", "text": "Standard SVM training has O(m3) time andO(m2) space complexities, where m is the training set size. It is thus computationally infeasible on very larg e data sets. By observing that practical SVM implementations onlyapproximatethe optimal solution by an iterative strategy, we scale up kernel methods by exploiting such “approximateness” in t h s paper. We first show that many kernel methods can be equivalently formulated as minimum en closing ball (MEB) problems in computational geometry. Then, by adopting an efficient appr oximate MEB algorithm, we obtain provably approximately optimal solutions with the idea of c re sets. Our proposed Core Vector Machine (CVM) algorithm can be used with nonlinear kernels a nd has a time complexity that is linear in m and a space complexity that is independent of m. Experiments on large toy and realworld data sets demonstrate that the CVM is as accurate as exi sting SVM implementations, but is much faster and can handle much larger data sets than existin g scale-up methods. For example, CVM with the Gaussian kernel produces superior results on th e KDDCUP-99 intrusion detection data, which has about five million training patterns, in only 1.4 seconds on a 3.2GHz Pentium–4 PC.", "title": "" } ]
[ { "docid": "298d3280deb3bb326314a7324d135911", "text": "BACKGROUND\nUterine leiomyomas are rarely seen in adolescent and to date nine leiomyoma cases have been reported under age 17. Eight of these have been treated surgically via laparotomic myomectomy.\n\n\nCASE\nA 16-year-old girl presented with a painless, lobulated necrotic mass protruding through the introitus. The mass originated from posterior uterine wall resected using hysteroscopy. Final pathology report revealed a submucous uterine leiomyoma.\n\n\nSUMMARY AND CONCLUSION\nSubmucous uterine leiomyomas may present as a vaginal mass in adolescents and can be safely treated using hysteroscopy.", "title": "" }, { "docid": "26282a6d69b021755e5b02f8798bdcb9", "text": "Recently, extensive research efforts have been dedicated to view-based methods for 3-D object retrieval due to the highly discriminative property of multiviews for 3-D object representation. However, most of state-of-the-art approaches highly depend on their own camera array settings for capturing views of 3-D objects. In order to move toward a general framework for 3-D object retrieval without the limitation of camera array restriction, a camera constraint-free view-based (CCFV) 3-D object retrieval algorithm is proposed in this paper. In this framework, each object is represented by a free set of views, which means that these views can be captured from any direction without camera constraint. For each query object, we first cluster all query views to generate the view clusters, which are then used to build the query models. For a more accurate 3-D object comparison, a positive matching model and a negative matching model are individually trained using positive and negative matched samples, respectively. The CCFV model is generated on the basis of the query Gaussian models by combining the positive matching model and the negative matching model. The CCFV removes the constraint of static camera array settings for view capturing and can be applied to any view-based 3-D object database. We conduct experiments on the National Taiwan University 3-D model database and the ETH 3-D object database. Experimental results show that the proposed scheme can achieve better performance than state-of-the-art methods.", "title": "" }, { "docid": "8b46e6e341f4fdf4eb18e66f237c4000", "text": "We present a general learning-based approach for phrase-level sentiment analysis that adopts an ordinal sentiment scale and is explicitly compositional in nature. Thus, we can model the compositional effects required for accurate assignment of phrase-level sentiment. For example, combining an adverb (e.g., “very”) with a positive polar adjective (e.g., “good”) produces a phrase (“very good”) with increased polarity over the adjective alone. Inspired by recent work on distributional approaches to compositionality, we model each word as a matrix and combine words using iterated matrix multiplication, which allows for the modeling of both additive and multiplicative semantic effects. Although the multiplication-based matrix-space framework has been shown to be a theoretically elegant way to model composition (Rudolph and Giesbrecht, 2010), training such models has to be done carefully: the optimization is nonconvex and requires a good initial starting point. This paper presents the first such algorithm for learning a matrix-space model for semantic composition. 
In the context of the phrase-level sentiment analysis task, our experimental results show statistically significant improvements in performance over a bag-of-words model.", "title": "" }, { "docid": "947d4c60427377bcb466fe1393c5474c", "text": "This paper presents a single BCD technology platform with high performance power devices at a wide range of operating voltages. The platform offers 6 V to 70 V LDMOS devices. All devices offer best-in-class specific on-resistance of 20 to 40 % lower than that of the state-of-the-art IC-based LDMOS devices and robustness better than the square SOA (safe-operating-area). Fully isolated LDMOS devices, in which independent bias is capable for circuit flexibility, demonstrate superior specific on-resistance (e.g. 11.9 mΩ-mm2 for breakdown voltage of 39 V). Moreover, the unusual sudden current enhancement appeared in the ID-VD saturation region of most of the high voltage LDMOS devices is significantly suppressed.", "title": "" }, { "docid": "0b631a4139efb14c1fe43876b29cf1c6", "text": "In recent years, remote sensing image data have increased significantly due to the improvement of remote sensing technique. On the other hand, data acquisition rate will also be accelerated by increasing satellite sensors. Hence, it is a large challenge to make full use of so considerable data by conventional retrieval approach. The lack of semantic based retrieval capability has impeded application of remote sensing data. To address the issue, we propose a framework based on domain-dependent ontology to perform semantic retrieval in image archives. Firstly, primitive features expressed by color and texture are extracted to gain homogeneous region by means of our unsupervised algorithm. The homogeneous regions are described by high-level concepts depicted and organized by domain specific ontology. Interactive learning technique is employed to associate regions and high-level concepts. These associations are used to perform querying task. Additionally, a reasoning mechanism over ontology integrating an inference engine is discussed. It enables the capability of semantic query in archives by mining the interrelationships among domain concepts and their properties to satisfy users’ requirements. In our framework, ontology is used to provide a sharable and reusable concept set as infrastructure for high level extension such as reasoning. Finally, preliminary results are presented and future work is also discussed. Keywords: Image retrieval; Ontology; Semantic reasoning;", "title": "" }, { "docid": "adf6ac64c2c1af405e9500ce1ea35cf2", "text": "Mining detailed opinions buried in the vast amount of review text data is an important, yet quite challenging task with widespread applications in multiple domains. Latent Aspect Rating Analysis (LARA) refers to the task of inferring both opinion ratings on topical aspects (e.g., location, service of a hotel) and the relative weights reviewers have placed on each aspect based on review content and the associated overall ratings. A major limitation of previous work on LARA is the assumption of pre-specified aspects by keywords. 
However, the aspect information is not always available, and it may be difficult to pre-define appropriate aspects without a good knowledge about what aspects are actually commented on in the reviews.\n In this paper, we propose a unified generative model for LARA, which does not need pre-specified aspect keywords and simultaneously mines 1) latent topical aspects, 2) ratings on each identified aspect, and 3) weights placed on different aspects by a reviewer. Experiment results on two different review data sets demonstrate that the proposed model can effectively perform the Latent Aspect Rating Analysis task without the supervision of aspect keywords. Because of its generality, the proposed model can be applied to explore all kinds of opinionated text data containing overall sentiment judgments and support a wide range of interesting application tasks, such as aspect-based opinion summarization, personalized entity ranking and recommendation, and reviewer behavior analysis.", "title": "" }, { "docid": "6efe106949b3611a98608a8624c1ce22", "text": "W e analyze contracting behaviors in a two-tier supply chain system consisting of competing manufacturers and competing retailers. We contrast the contracting outcome of a Stackelberg game, in which the manufacturers offer take-itor-leave-it contracts to the retailers, with that of a bargaining game, in which the firms bilaterally negotiate contract terms via a process of alternating offers. The manufacturers in the Stackelberg game possess a Stackelberg-leader advantage in that the retailers are not entitled to make counteroffers. Our analysis suggests that whether this advantage would benefit the manufacturers depends on the contractual form. With simple contracts such as wholesale-price contracts, which generally do not allow one party to fully extract the trade surplus, the Stackelberg game replicates the boundary case of the bargaining game with the manufacturers possessing all the bargaining power. In contrast, with sophisticated contracts such as two-part tariffs, which enable full surplus extraction, the two games lead to distinct outcomes. We further show that the game structure being Stackelberg or bargaining critically affects firms’ preferences over contract types and thus their equilibrium contract choices. These observations suggest that the Stackelberg game may not be a sufficient device to predict contracting behaviors in reality where bargaining is commonly observed.", "title": "" }, { "docid": "1e4a74d8d4ae131467e12911fd6ac281", "text": "Google Scholar has been well received by the research community. Its promises of free, universal and easy access to scientific literature as well as the perception that it covers better than other traditional multidisciplinary databases the areas of the Social Sciences and the Humanities have contributed to the quick expansion of Google Scholar Citations and Google Scholar Metrics: two new bibliometric products that offer citation data at the individual level and at journal level. In this paper we show the results of a experiment undertaken to analyze Google Scholar's capacity to detect citation counting manipulation. For this, six documents were uploaded to an institutional web domain authored by a false researcher and referencing all the publications of the members of the EC3 research group at the University of Granada. The detection of Google Scholar of these papers outburst the citations included in the Google Scholar Citations profiles of the authors. 
We discuss the effects of such outburst and how it could affect the future development of such products not only at individual level but also at journal level, especially if Google Scholar persists with its lack of transparency.", "title": "" }, { "docid": "91597681b766844cc55deac76dbbf38a", "text": "Availability of large data sets like ImageNet and massively parallel computation support in modern HPC devices like NVIDIA GPUs have fueled a renewed interest in Deep Learning (DL) algorithms. This has triggered the development of DL frameworks like Caffe, Torch, TensorFlow, and CNTK. However, most DL frameworks have been limited to a single node. In order to scale out DL frameworks and bring HPC capabilities to the DL arena, we propose, S-Caffe; a scalable and distributed Caffe adaptation for modern multi-GPU clusters. With an in-depth analysis of new requirements brought forward by the DL frameworks and limitations of current communication runtimes, we present a co-design of the Caffe framework and the MVAPICH2-GDR MPI runtime. Using the co-design methodology, we modify Caffe's workflow to maximize the overlap of computation and communication with multi-stage data propagation and gradient aggregation schemes. We bring DL-Awareness to the MPI runtime by proposing a hierarchical reduction design that benefits from CUDA-Aware features and provides up to a massive 133x speedup over OpenMPI and 2.6x speedup over MVAPICH2 for 160 GPUs. S-Caffe successfully scales up to 160 K-80 GPUs for GoogLeNet (ImageNet) with a speedup of 2.5x over 32 GPUs. To the best of our knowledge, this is the first framework that scales up to 160 GPUs. Furthermore, even for single node training, S-Caffe shows an improvement of 14\\% and 9\\% over Nvidia's optimized Caffe for 8 and 16 GPUs, respectively. In addition, S-Caffe achieves up to 1395 samples per second for the AlexNet model, which is comparable to the performance of Microsoft CNTK.", "title": "" }, { "docid": "1a12992fe2e6ce238b420d657e739c18", "text": "In early 2018, the second edition of ISO 26262:2018[1] automotive functional safety standard, is due for release. At the time of writing, the draft international standard (DIS) version is out for comment and review. One significant change over the original version of the ISO 26262:2011[2] standard is part 11, which brings detailed information to support semiconductor manufacturers develop ISO 26262 compliant intellectual property (IP). In the original version, information available to semiconductor companies was limited, this forthcoming release will bring significantly more information to support semiconductor and silicon IP suppliers. In the areas of digital and analogue components, programmable logic devices (PLD), multi-core processors and sensors. Tips, recommendations and practical examples are illustrated. However, there are certain areas that still not well represented, diagnostic coverage for analogue components for example is not defined in detail and there is a shortage of supporting information. Part 11 could also provide more worked examples to give design and functional safety teams a better insight into estimation techniques. The final draft international standard (FDIS) is due for publication in autumn 2017, and certain aspects of part 11 will be enhanced.", "title": "" }, { "docid": "1203f22bfdfc9ecd211dbd79a2043a6a", "text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. 
We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 km with bit rates of 100 Hz are achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the public-key and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certain x, it is easy to compute f(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 × 53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years, however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice, but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. 
The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the orresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.", "title": "" }, { "docid": "7130731b6603e4be28e8503c185176f2", "text": "CAViAR is a mobile software system for indoor environments that provides to the mobile user equipped with a smartphone indoor localization, augmented reality (AR), visual interaction, and indoor navigation. These capabilities are possible with the availability of state of the art AR technologies. The mobile application includes additional features, such as indoor maps, shortest path, inertial navigation, places of interest, location sharing and voice-commanded search. CAViAR was tested in a University Campus as one of the technologies to be used later in an intelligent Campus environment.", "title": "" }, { "docid": "17c0ef52e8f4dade526bf56f158967ef", "text": "Consider a distributed computing setup consisting of a master node and n worker nodes, each equipped with p cores, and a function f (x) = g(f1(x), f2(x),…, fk(x)), where each fi can be computed independently of the rest. Assuming that the worker computational times have exponential tails, what is the minimum possible time for computing f? Can we use coding theory principles to speed up this distributed computation? In [1], it is shown that distributed computing of linear functions can be expedited by applying linear erasure codes. However, it is not clear if linear codes can speed up distributed computation of ‘nonlinear’ functions as well. To resolve this problem, we propose the use of sparse linear codes, exploiting the modern multicore processing architecture. We show that 1) our coding solution achieves the order optimal runtime, and 2) it is at least Θ(√log n) times faster than any uncoded schemes where the number of workers is n.", "title": "" }, { "docid": "96682d87e6f512728f4b54c3c7eb6d4b", "text": "Highly expressive models such as deep neural networks (DNNs) have been widely applied to various applications and achieved increasing success. However, recent studies show that such machine learning models appear to be vulnerable against adversarial examples. So far adversarial examples have been heavily explored for 2D images, while few works have conducted to understand vulnerabilities of 3D objects which exist in real world, where 3D objects are projected to 2D domains by photo taking for different learning (recognition) tasks. In this paper we consider adversarial behaviors in practical scenarios by manipulating the shape and texture of a given 3D mesh representation of an object. Our goal is to project the optimized “adversarial meshes\" to 2D with a photorealistic renderer, and still able to mislead different machine learning models. 
Extensive experiments show that by generating unnoticeable 3D adversarial perturbation on shape or texture for a 3D mesh, the corresponding projected 2D instance can either lead classifiers to misclassify the victim object as an arbitrary malicious target, or hide any target object within the scene from object detectors. We conduct human studies to show that our optimized adversarial 3D perturbation is highly unnoticeable for human vision systems. In addition to the subtle perturbation for a given 3D mesh, we also propose to synthesize a realistic 3D mesh and put in a scene mimicking similar rendering conditions and therefore attack different machine learning models. In-depth analysis of transferability among various 3D renderers and vulnerable regions of meshes are provided to help better understand adversarial behaviors in real-world.", "title": "" }, { "docid": "133f944437ebedbe9dc3bb1a1e725d88", "text": "S100A7 (psoriasin), an EF-hand type calcium binding protein localized in epithelial cells, regulates cell proliferation and differentiation. An S100A7 overexpression may occur in response to inflammatory stimuli, such in psoriasis, a chronic inflammatory autoimmune-mediated skin disease. Increasing evidence suggests that S100A7 plays critical roles in amplifying the inflammatory process in psoriatic skin, perpetuating the disease phenotype. This review will discuss the interactions between S100A7 and cytokines in psoriatic skin. Furthermore, we will focus our discussion on regulation and functions of S100A7 in psoriasis. Finally, we will discuss the possible use of S100A7 as therapeutic target in psoriasis.", "title": "" }, { "docid": "e0682efd9c8807411da832b796b47da2", "text": "The rise of cloud computing is radically changing the way enterprises manage their information technology (IT) assets. Considering the benefits of cloud computing to the information technology sector, we present a review of current research initiatives and applications of the cloud computing paradigm related to product design and manufacturing. In particular, we focus on exploring the potential of utilizing cloud computing for selected aspects of collaborative design, distributed manufacturing, collective innovation, data mining, semantic web technology, and virtualization. In addition, we propose to expand the paradigm of cloud computing to the field of computer-aided design and manufacturing and propose a new concept of cloud-based design and manufacturing (CBDM). Specifically, we (1) propose a comprehensive definition of CBDM; (2) discuss its key characteristics; (3) relate current research in design and manufacture to CBDM; and (4) identify key research issues and future trends. 1", "title": "" }, { "docid": "27474721ff0e01f17cef8d5089f42354", "text": "In this paper, we propose a secondary consensus-based control layer for current sharing and voltage balancing in DC microGrids (mGs). Differently from existing approaches based on droop control, we assume decentralized Plug-and-Play (PnP) controllers at the primary level as they provide voltage stabilization and their design complexity is independent of the mG size. We analyze the behavior of the closed-loop mG by approximating local primary control loops with either unitary gains or first-order transfer functions. Besides proving stability, current sharing and voltage balancing in the asymptotic régime, we describe how to design secondary controllers in a PnP fashion when distributed generation units are added or removed. 
Theoretical results are complemented by simulations using a 5-DGUs mG implemented in Simulink/PLECS.", "title": "" }, { "docid": "131862b294936c95b8dba851b38c86fa", "text": "In this paper, we revisit the Lagrangian accumulation process that aggregates the local attribute information along integral curves for vector field visualization. Similar to the previous work, we adopt the notation of the Lagrangian accumulation field or A field for the representation of the accumulation results. In contrast to the previous work, we provide a more in-depth discussion on the properties of A fields and the meaning of the patterns exhibiting in A fields. In particular, we revisit the discontinuity in the A fields and provide a thorough explanation of its relation to the flow structure and the additional information of the flow that it may reveal. In addition, other remaining questions about the A field, such as its sensitivity to the selection of integration time, are also addressed. Based on these new insights, we demonstrate a number of enhanced flow visualizations aided by the accumulation framework and the A fields, including a new A field guided ribbon placement, a A field guided stream surface seeding and the visualization of particle-based flow data. To further demonstrate the generality of the accumulation framework, we extend it to the non-integral geometric curves (i.e. streak lines), which enables us to reveal information of the flow behavior other than those revealed by the integral curves. Finally, we introduce the Eulerian accumulation, which can reveal different flow behavior information from those revealed by the Lagrangian accumulation. In summary, we believe the Lagrangian accumulation and the resulting A fields offer a valuable way for the exploration of flow behaviors in addition to the current state-of-the-art techniques. c © 2017 Elsevier B. V. All rights reserved.", "title": "" }, { "docid": "04609d7cd9809e16f8dc81cc142b42ec", "text": "Cloud computing provides a lot of shareable resources payable on demand to the users. The drawback with cloud computing is the security challenges since the data in the cloud are managed by third party. Steganography and cryptography are some of the security measures applied in the cloud to secure user data. The objective of steganography is to hide the existence of communication from the unintended users whereas cryptography does provide security to user data to be transferred in the cloud. Since users pay for the services utilize in the cloud, the need to evaluate the performance of the algorithms used in the cloud to secure user data in order to know the resource consumed by such algorithms such as storage memory, network bandwidth, computing power, encryption and decryption time becomes imperative. In this work, we implemented and evaluated the performance of Text steganography and RSA algorithm and Image steganography and RSA as Digital signature considering four test cases. The simulation results show that, image steganography with RSA as digital signature performs better than text steganography and RSA algorithm. 
The performance differences between the two algorithms are 10.76, 9.93, 10.53 and 10.53 seconds for encryption time, 60.68, 40.94, 40.9, and 41.85 seconds for decryption time, 8.1, 10.92, 15.2 and 5.17 mb for memory used when hiding data, 5.3, 1.95 and 17.18 mb for memory used when extracting data, 0.93, 1.04, 1.36 and 3.76 mb for bandwidth used, 75.75, 36.2, 36.9 and 37.45 kwh for processing power used when hiding and extracting data respectively. Except in test case2 where Text steganography and RSA algorithm perform better than Image Steganography and RSA as Digital Signature in terms of memory used when extracting data with performance difference of -5.09 mb because of the bit size of the image data when extracted. This research work recommend the use of image steganography and RSA as digital signature to cloud service providers and users since it can secure major data types such as text, image, audio and video used in the cloud and consume less system resources.", "title": "" }, { "docid": "0210a0cd8c530dd181bbae1a5bdd9b1a", "text": "Most of the social media platforms generate a massive amount of raw data that is slow-paced. On the other hand, Internet Relay Chat (IRC) protocol, which has been extensively used by hacker community to discuss and share their knowledge, facilitates fast-paced and real-time text communications. Previous studies of malicious IRC behavior analysis were mostly either offline or batch processing. This results in a long response time for data collection, pre-processing, and threat detection. However, since the threats can use the latest vulnerabilities to exploit systems (e.g. zero-day attack) and which can spread fast using IRC channels. Current IRC channel monitoring techniques cannot provide the required fast detection and alerting. In this paper, we present an alternative approach to overcome this limitation by providing real-time and autonomic threat detection in IRC channels. We demonstrate the capabilities of our approach using as an example the shadow brokers' leak exploit (the exploit leveraged by WannaCry ransomware attack) that was captured and detected by our framework.", "title": "" } ]
scidocsrr
4c8cb4eddd646456f86bfeff298566c1
Empirical Analysis on Hotel Online Booking Consumer's Satisfaction with E-service of Website
[ { "docid": "b44600830a6aacd0a1b7ec199cba5859", "text": "Existing e-service quality scales mainly focus on goal-oriented e-shopping behavior excluding hedonic quality aspects. As a consequence, these scales do not fully cover all aspects of consumer's quality evaluation. In order to integrate both utilitarian and hedonic e-service quality elements, we apply a transaction process model to electronic service encounters. Based on this general framework capturing all stages of the electronic service delivery process, we develop a transaction process-based scale for measuring service quality (eTransQual). After conducting exploratory and confirmatory factor analysis, we identify five discriminant quality dimensions: functionality/design, enjoyment, process, reliability and responsiveness. All extracted dimensions of eTransQual show a significant positive impact on important outcome variables like perceived value and customer satisfaction. Moreover, enjoyment is a dominant factor in influencing both relationship duration and repurchase intention as major drivers of customer lifetime value. As a result, we present conceptual and empirical evidence for the need to integrate both utilitarian and hedonic e-service quality elements into one measurement scale. © 2006 Elsevier Inc. All rights reserved.", "title": "" } ]
[ { "docid": "34ff8cd119a77057ccfc0ee682dfc0ac", "text": "A variety of real-world processes (over networks) produce sequences of data whose complex temporal dynamics need to be studied. More especially, the event timestamps can carry important information about the underlying network dynamics, which otherwise are not available from the time-series evenly sampled from continuous signals. Moreover, in most complex processes, event sequences and evenly-sampled times series data can interact with each other, which renders joint modeling of those two sources of data necessary. To tackle the above problems, in this paper, we utilize the rich framework of (temporal) point processes to model event data and timely update its intensity function by the synergic twin Recurrent Neural Networks (RNNs). In the proposed architecture, the intensity function is synergistically modulated by one RNN with asynchronous events as input and another RNN with time series as input. Furthermore, to enhance the interpretability of the model, the attention mechanism for the neural point process is introduced. The whole model with event type and timestamp prediction output layers can be trained end-to-end and allows a black-box treatment for modeling the intensity. We substantiate the superiority of our model in synthetic data and three real-world benchmark datasets.", "title": "" }, { "docid": "eea57066c7cd0b778188c2407c8365f3", "text": "For over two decades, video streaming over the Internet has received a substantial amount of attention from both academia and industry. Starting from the design of transport protocols for streaming video, research interests have later shifted to the peer-to-peer paradigm of designing streaming protocols at the application layer. More recent research has focused on building more practical and scalable systems, using Dynamic Adaptive Streaming over HTTP. In this article, we provide a retrospective view of the research results over the past two decades, with a focus on peer-to-peer streaming protocols and the effects of cloud computing and social media.", "title": "" }, { "docid": "47e9515f703c840c38ab0c3095f48a3a", "text": "Hnefatafl is an ancient Norse game - an ancestor of chess. In this paper, we report on the development of computer players for this game. In the spirit of Blondie24, we evolve neural networks as board evaluation functions for different versions of the game. An unusual aspect of this game is that there is no general agreement on the rules: it is no longer much played, and game historians attempt to infer the rules from scraps of historical texts, with ambiguities often resolved on gut feeling as to what the rules must have been in order to achieve a balanced game. We offer the evolutionary method as a means by which to judge the merits of alternative rule sets", "title": "" }, { "docid": "2059db0707ffc28fd62b7387ba6d09ae", "text": "Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach to do so in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance. But the quantizer established by this scheme does not allow major variations. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. 
GEQ schemes can be devised for specific decoding rates achieving optimal coding performance. Practical approaches of GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated in this paper through experimental results carried out in the framework of modern image coding systems.", "title": "" }, { "docid": "c7ea816f2bb838b8c5aac3cdbbd82360", "text": "Semantic annotated parallel corpora, though rare, play an increasingly important role in natural language processing. These corpora provide valuable data for computational tasks like sense-based machine translation and word sense disambiguation, but also to contrastive linguistics and translation studies. In this paper we present the ongoing development of a web-based corpus semantic annotation environment that uses the Open Multilingual Wordnet (Bond and Foster, 2013) as a sense inventory. The system includes interfaces to help coordinating the annotation project and a corpus browsing interface designed specifically to meet the needs of a semantically annotated corpus. The tool was designed to build the NTU-Multilingual Corpus (Tan and Bond, 2012). For the past six years, our tools have been tested and developed in parallel with the semantic annotation of a portion of this corpus in Chinese, English, Japanese and Indonesian. The annotation system is released under an open source license (MIT).", "title": "" }, { "docid": "eced014d1a6b3b20ab41172be3de3518", "text": "Driving intention recognition and trajectory prediction of moving vehicles are two important requirements of future advanced driver assistance systems (ADAS) for urban intersections. In this paper, we present a consistent framework for solving these two problems. The key idea is to model the spatio-temporal dependencies of traffic situations with a two-dimensional Gaussian process regression. With this representation the driving intention can be recognized by evaluating the data likelihood for each individual regression model. For the trajectory prediction purpose, we transform these regression models into the corresponding dynamical models and combine them with Unscented Kalman Filters (UKF) to overcome the non-linear issue. We evaluate our framework with data collected from real traffic scenarios and show that our approach can be used for recognition of different driving intentions and for long-term trajectory prediction of traffic situations occurring at urban intersections.", "title": "" }, { "docid": "36162ebd7d7c5418e4c78bad5bbba8ab", "text": "In this paper we discuss the design of human-robot interaction focussing especially on social robot communication and multimodal information presentation. As a starting point we use the WikiTalk application, an open-domain conversational system which has been previously developed using a robotics simulator. We describe how it can be implemented on the Nao robot platform, enabling Nao to make informative spoken contributions on a wide range of topics during conversation. Spoken interaction is further combined with gesturing in order to support Nao’s presentation by natural multimodal capabilities, and to enhance and explore natural communication between human users and robots.", "title": "" }, { "docid": "015da67991b6480433f889bd597abdb4", "text": "Nowadays the requirement for developing a wheel chair control which is useful for the physically disabled person with Tetraplegia. 
This system involves the control of the wheel chair with the eye movement of the affected person. Statistics suggest that there are 230,000 cases of Tetraplegia in India. Our system here is to develop a wheelchair which makes the lives of these people easier and instigates confidence to live in them. We know that a person who is affected by Tetraplegia can move their eyes alone to a certain extent, which paves the idea for the development of our system. Here we have proposed the method for a device where a patient placed on the wheel chair, looking in a straight line at the camera which is permanently fixed in the optics, is capable of moving in a track by gazing in that way. When we change the direction, the camera signals are given using the MATLAB script to the microcontroller. Depending on the path of the eye, the microcontroller controls the wheel chair in all directions and stops the movement. If any obstacle is found in front of the wheel chair, the sensor detects it and the chair stops and moves in the right direction immediately. The benefit of this system is to easily travel anywhere in any direction, handled by a physically disabled person with Tetraplegia.", "title": "" }, { "docid": "ebb01a778c668ef7b439875eaa5682ac", "text": "In this paper, we present a large scale off-line handwritten Chinese character database-HCL2000 which will be made publicly available for the research community. The database contains 3,755 frequently used simplified Chinese characters written by 1,000 different subjects. The writers’ information is incorporated in the database to facilitate testing on grouping writers with different backgrounds such as age, occupation, gender, and education etc. We investigate some characteristics of writing styles from different groups of writers. We evaluate the HCL2000 database using three different algorithms as a baseline. We decide to publish the database along with this paper and make it free for a research purpose.", "title": "" }, { "docid": "83e3ce2b70e1f06073fd0a476bf04ff7", "text": "Each year, a number of natural disasters strike across the globe, killing hundreds and causing billions of dollars in property and infrastructure damage. Minimizing the impact of disasters is imperative in today's society. As the capabilities of software and hardware evolve, so does the role of information and communication technology in disaster mitigation, preparation, response, and recovery. A large quantity of disaster-related data is available, including response plans, records of previous incidents, simulation data, social media data, and Web sites. However, current data management solutions offer few or no integration capabilities. Moreover, recent advances in cloud computing, big data, and NoSQL open the door for new solutions in disaster data management. In this paper, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM), with the objectives of 1) storing large amounts of disaster-related data from diverse sources, 2) facilitating search, and 3) supporting their interoperability and integration. Data are stored in a cloud environment using a combination of relational and NoSQL databases. 
The case study presented in this paper illustrates the use of Disaster-CDM on an example of simulation models.", "title": "" }, { "docid": "18c56e9d096ba4ea48a0579626f83edc", "text": "PURPOSE\nThe purpose of this study was to provide an overview of platelet-rich plasma (PRP) injected into the scalp for the management of androgenic alopecia.\n\n\nMATERIALS AND METHODS\nA literature review was performed to evaluate the benefits of PRP in androgenic alopecia.\n\n\nRESULTS\nHair restoration has been increasing. PRP's main components of platelet-derived growth factor, transforming growth factor, and vascular endothelial growth factor have the potential to stimulate hard and soft tissue wound healing. In general, PRP showed a benefit on patients with androgenic alopecia, including increased hair density and quality. Currently, different PRP preparations are being used with no standard technique.\n\n\nCONCLUSION\nThis review found beneficial effects of PRP on androgenic alopecia. However, more rigorous study designs, including larger samples, quantitative measurements of effect, and longer follow-up periods, are needed to solidify the utility of PRP for treating patients with androgenic alopecia.", "title": "" }, { "docid": "3bda0519ec7f61a4778cddfaa0c9b12d", "text": "Recommender systems are assisting users in the process of identifying items that fulfill their wishes and needs. These systems are successfully applied in different e-commerce settings, for example, to the recommendation of news, movies, music, books, and digital cameras. The major goal of this book chapter is to discuss new and upcoming applications of recommendation technologies and to provide an outlook on major characteristics of future technological developments. Based on a literature analysis, we discuss new and upcoming applications in domains such as software engineering, data & knowledge engineering, configurable items, and persuasive technologies. Thereafter we sketch major properties of the next generation of recommendation technologies.", "title": "" }, { "docid": "bc121dff9e8e0e8a48c3bbda3417f32b", "text": "This report reflects, from a software engineering perspective, on the experience of designing and implementing protection mechanisms for ASP.NET Web services. The limitations of Microsoft ASP.NET container security mechanisms render them inadequate for hosting enterprise-scale applications that have to be protected according to diverse and/or complex application-specific security policies. In this paper we report on our experience of designing and implementing a component-based architecture for protecting enterprise-grade Web service applications hosted by ASP.NET. Due to its flexibility and extensibility, this architecture enables the integration of ASP.NET into the organizational security infrastructure with less effort by Web service developers. The architecture has been implemented in a real-world security solution. This paper also contributes a best practice on constructing flexible and extensible authentication and authorization logic for Web services by using Resource Access Decision and Attribute Function (AF) architectural styles. Furthermore, the lessons learned from our design and implementation experiences are discussed throughout the paper.", "title": "" }, { "docid": "5f21152914659d5aa146590d81522177", "text": "For applications such as Amazon warehouse order fulfillment, robots must grasp a desired object amid clutter: other objects that block direct access. 
This can be difficult to program explicitly due to uncertainty in friction and push mechanics and the variety of objects that can be encountered. Deep Learning networks combined with Online Learning from Demonstration (LfD) algorithms such as DAgger and SHIV have potential to learn robot control policies for such tasks where the input is a camera image and system dynamics and the cost function are unknown. To explore this idea, we introduce a version of the grasping in clutter problem where a yellow cylinder must be grasped by a planar robot arm amid extruded objects in a variety of shapes and positions. To reduce the burden on human experts to provide demonstrations, we propose using a hierarchy of three levels of supervisors: a fast motion planner that ignores obstacles, crowd-sourced human workers who provide appropriate robot control values remotely via online videos, and a local human expert. Physical experiments suggest that with 160 expert demonstrations, using the hierarchy of supervisors can increase the probability of a successful grasp (reliability) from 55% to 90%.", "title": "" }, { "docid": "aaa7983870f3861eb7c3fc1e81555a89", "text": "This correspondence examines Tomlinson–Harashima precoding (THP) on discrete-time channels having intersymbol interference and additive white Gaussian noise. An exact expression for the maximum achievable information rate of zero-forcing (ZF) THP is derived as a function of the channel impulse response, the input power constraint, and the additive white Gaussian noise variance. Information rate bounds are provided for the minimum mean-square error (MMSE) THP. The performance of ZF-THP and MMSE-THP relative to each other and to channel capacity is explored in general and for some example channels. The importance of symbol rate to ZF-THP performance is demonstrated.", "title": "" }, { "docid": "6aaee9f90e64755c0b8b1306972df748", "text": "Combining information from various data sources has become an important research topic in machine learning with many scientific applications. Most previous studies employ kernels or graphs to integrate different types of features, which routinely assume one weight for one type of features. However, for many problems, the importance of features in one source to an individual cluster of data can be varied, which makes the previous approaches ineffective. In this paper, we propose a novel multi-view learning model to integrate all features and learn the weight for every feature with respect to each cluster individually via new joint structured sparsity-inducing norms. The proposed multi-view learning framework allows us not only to perform clustering tasks, but also to deal with classification tasks by an extension when the labeling knowledge is available. A new efficient algorithm is derived to solve the formulated objective with rigorous theoretical proof on its convergence. We applied our new data fusion method to five broadly used multi-view data sets for both clustering and classification. In all experimental results, our method clearly outperforms other related state-of-the-art methods.", "title": "" }, { "docid": "f6be55027aa2e3b3af81f22eee84ada2", "text": "Face sketch is the main approach to find suspect in law enforcement, especially in many cases when facial attribute descriptions of suspects by witnesses are available. Face sketch synthesized from facial attribute text can also be used in sketch based face recognition. 
While most previous work focus on face photo to sketch synthesis, the problem of sketch synthesis with facial attribute text has not been explored yet. The problem is challenging due to two facts: firstly, no database of face attribute text to sketch is available; secondly, it is hard to synthesize high-quality face sketches due to the ambiguity and complexity of text description. In this paper, we propose a face sketch synthesis approach with text using Stagewise-GAN. Our contributions lie in two aspects: 1) we construct the first text to face sketch database. The database, namely Text2Sketch dataset, is annotated with CUFSF dataset of 1194 sketches. For each sketch, an attribute description is labelled; 2) we synthesize vivid face sketches using Stagewise-GAN. We use user study, face retrieval performance with synthesized sketch, and quantitative results for evaluation. Experimental results show the effectiveness of our approach.", "title": "" }, { "docid": "95a376ec68ac3c4bd6b0fd236dca5bcd", "text": "Long-term suppression of postprandial glucose concentration is an important dietary strategy for the prevention and treatment of type 2 diabetes. Because previous reports have suggested that seaweed may exert anti-diabetic effects in animals, the effects of Wakame or Mekabu intake with 200 g white rice, 50 g boiled soybeans, 60 g potatoes, and 40 g broccoli on postprandial glucose, insulin and free fatty acid levels were investigated in healthy subjects. Plasma glucose levels at 30 min and glucose area under the curve (AUC) at 0-30 min after the Mekabu meal were significantly lower than that after the control meal. Plasma glucose and glucose AUC were not different between the Wakame and control meals. Postprandial serum insulin and its AUC and free fatty acid concentration were not different among the three meals. In addition, fullness, satisfaction, and wellness scores were not different among the three meals. Thus, consumption of 70 g Mekabu with a white rice-based breakfast reduces postprandial glucose concentration.", "title": "" }, { "docid": "3acc4d7100331b56fa244bd618373a56", "text": "Although deep neural networks (DNNs) have achieved great success in many tasks, recent studies have shown they are vulnerable to adversarial examples. Such examples, typically generated by adding small but purposeful distortions, can frequently fool DNN models. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or suffered from expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model’s prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two types of feature squeezing: reducing the color bit depth of each pixel and spatial smoothing. 
These strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.", "title": "" }, { "docid": "531d9a569e308748ab9160b95987ff89", "text": "In recent years, there has been increased interest in real-world event summarization using publicly accessible data made available through social networking services such as Twitter and Facebook. People use these outlets to communicate with others, express their opinion and commentate on a wide variety of real-world events. Due to the heterogeneity, the sheer volume of text and the fact that some messages are more informative than others, automatic summarization is a very challenging task. This paper presents three techniques for summarizing microblog documents by selecting the most representative posts for real-world events (clusters). In particular, we tackle the task of multilingual summarization in Twitter. We evaluate the generated summaries by comparing them to both human produced summaries and to the summarization results of similar leading summarization systems. Our results show that our proposed Temporal TF-IDF method outperforms all the other summarization systems for both the English and non-English corpora as they lead to informative summaries.", "title": "" } ]
scidocsrr
1c6a0d932f44ebe74a935fe66f3c5b0d
Improved Iris Recognition through Fusion of Hamming Distance and Fragile Bit Distance
[ { "docid": "f0d906563c13da83cbe57b9186c53524", "text": "In this paper, we propose a fast search algorithm for a large fuzzy database that stores iris codes or data with a similar binary structure. The fuzzy nature of iris codes and their high dimensionality render many modern search algorithms, mainly relying on sorting and hashing, inadequate. The algorithm that is used in all current public deployments of iris recognition is based on a brute force exhaustive search through a database of iris codes, looking for a match that is close enough. Our new technique, Beacon Guided Search (BGS), tackles this problem by dispersing a multitude of “beacons” in the search space. Despite random bit errors, iris codes from the same eye are more likely to collide with the same beacons than those from different eyes. By counting the number of collisions, BGS shrinks the search range dramatically with a negligible loss of precision. We evaluate this technique using 632,500 iris codes enrolled in the United Arab Emirates (UAE) border control system, showing a substantial improvement in search speed with a negligible loss of accuracy. In addition, we demonstrate that the empirical results match theoretical predictions.", "title": "" } ]
[ { "docid": "dd792223589de1a8c0ad7bea9e52f05b", "text": "Query Optimization is expected to produce good execution plans for complex queries while taking relatively small optimization time. Moreover, it is expected to pick the execution plans with rather limited knowledge of data and without any additional input from the application. We argue that it is worth rethinking this prevalent model of the optimizer. Specifically, we discuss how the optimizer may benefit from leveraging rich usage data and from application input. We conclude with a call to action to further advance query optimization technology.", "title": "" }, { "docid": "66467834745400a89d6ffb21cf8906ec", "text": "People approach pleasure and avoid pain. To discover the true nature of approach-avoidance motivation, psychologists need to move beyond this hedonic principle to the principles that underlie the different ways that it operates. One such principle is regulatory focus, which distinguishes self-regulation with a promotion focus (accomplishments and aspirations) from self-regulation with a prevention focus (safety and responsibilities). This principle is used to reconsider the fundamental nature of approach-avoidance, expectancy-value relations, and emotional and evaluative sensitivities. Both types of regulatory focus are applied to phenomena that have been treated in terms of either promotion (e.g., well-being) or prevention (e.g., cognitive dissonance). Then, regulatory focus is distinguished from regulatory anticipation and regulatory reference, 2 other principles underlying the different ways that people approach pleasure and avoid pain.", "title": "" }, { "docid": "f8002d9ba52abe257fe7cfe357c844f7", "text": "The t-distributed Stochastic Neighbor Embedding (tSNE) algorithm has become in recent years one of the most used and insightful techniques for the exploratory data analysis of high-dimensional data. tSNE reveals clusters of high-dimensional data points at different scales while it requires only minimal tuning of its parameters. Despite these advantages, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of tSNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the tSNE embedding for large datasets. In this work, we present a novel approach to the minimization of the tSNE objective function that heavily relies on modern graphics hardware and has linear computational complexity. Our technique does not only beat the state of the art, but can even be executed on the client side in a browser. We propose to approximate the repulsion forces between data points using adaptive-resolution textures that are drawn at every iteration with WebGL. This approximation allows us to reformulate the tSNE minimization problem as a series of tensor operation that are computed with TensorFlow.js, a JavaScript library for scalable tensor computations.", "title": "" }, { "docid": "b9f665d7fe28d6abce0f429ed5a319ab", "text": "■ Abstract The enzyme lactase that is located in the villus enterocytes of the small intestine is responsible for digestion of lactose in milk. Lactase activity is high and vital during infancy, but in most mammals, including most humans, lactase activity declines after the weaning phase. 
In other healthy humans, lactase activity persists at a high level throughout adult life, enabling them to digest lactose as adults. This dominantly inherited genetic trait is known as lactase persistence. The distribution of these different lactase phenotypes in human populations is highly variable and is controlled by a polymorphic element cis-acting to the lactase gene. A putative causal nucleotide change has been identified and occurs on the background of a very extended haplotype that is frequent in Northern Europeans, where lactase persistence is frequent. This single nucleotide polymorphism is located 14 kb upstream from the start of transcription of lactase in an intron of the adjacent gene MCM6. This change does not, however, explain all the variation in lactase expression.", "title": "" }, { "docid": "503ccd79172e5b8b3cc3a26cf0d1b485", "text": "The field-of-view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360◦ full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from single-view using manual annotation. Experiments show that solely based on 3D context without any image-based object detector, we can achieve a comparable performance with the state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online.", "title": "" }, { "docid": "8284163c893d79213b6573249a0f0a32", "text": "Clustering is a core building block for data analysis, aiming to extract otherwise hidden structures and relations from raw datasets, such as particular groups that can be effectively related, compared, and interpreted. A plethora of visual-interactive cluster analysis techniques has been proposed to date, however, arriving at useful clusterings often requires several rounds of user interactions to fine-tune the data preprocessing and algorithms. We present a multi-stage Visual Analytics (VA) approach for iterative cluster refinement together with an implementation (SOMFlow) that uses Self-Organizing Maps (SOM) to analyze time series data. It supports exploration by offering the analyst a visual platform to analyze intermediate results, adapt the underlying computations, iteratively partition the data, and to reflect previous analytical activities. The history of previous decisions is explicitly visualized within a flow graph, allowing to compare earlier cluster refinements and to explore relations. We further leverage quality and interestingness measures to guide the analyst in the discovery of useful patterns, relations, and data partitions. 
We conducted two pair analytics experiments together with a subject matter expert in speech intonation research to demonstrate that the approach is effective for interactive data analysis, supporting enhanced understanding of clustering results as well as the interactive process itself.", "title": "" }, { "docid": "992667ba81d478a02876b3c4934aeb31", "text": "Inspired by the exoskeletons of insects, we have developed a number of manufacturing methods for the fabrication of structures for attachment, protection, and sensing. This manufacturing paradigm is based on infrared laser machining of lamina and the bonding of layered structures. The structures have been integrated with an inexpensive palm-sized legged robot, the VelociRoACH [Haldane et al., 2013, “Animal-Inspired Design and Aerodynamic Stabilization of a Hexapedal Millirobot,” IEEE/RSJ International Conference on Robotics and Automation, Karlsruhe, Germany, May 6–10, pp. 3279–3286]. We also present a methodology to design and fabricate folded robotic mechanisms, and have released an open-source robot, the OpenRoACH, as an example implementation of these techniques. We present new composite materials which enable the fabrication of stronger, larger scale smart composite microstructures (SCM) robots. We demonstrate how thermoforming can be used to manufacture protective structures resistant to water and capable of withstanding terminal velocity falls. A simple way to manufacture traction enhancing claws is demonstrated. An electronics layer can be incorporated into the robot structure, enabling the integration of distributed sensing. We present fabrication methods for binary and analog force sensing arrays, as well as a carbon nanotube (CNT) based strain sensor which can be fabricated in place. The presented manufacturing methods take advantage of low-cost, high accuracy two-dimensional fabrication processes which will enable low-cost mass production of robots integrated with mechanical linkages, an exoskeleton, and body and limb sensing. [DOI: 10.1115/1.4029495]", "title": "" }, { "docid": "d4820344d9c229ac15d002b667c07084", "text": "In this paper, we propose to integrate semantic similarity assessment in an edit distance algorithm, seeking to amend similarity judgments when comparing XML-based legal documents[3].", "title": "" }, { "docid": "37b3447959579cf5cf5e617417e3b575", "text": "BACKGROUND\nPosttraumatic osteoarthritis (PTOA) after anterior cruciate ligament (ACL) reconstruction ultimately translates into a large economic effect on the health care system owing to the young ages of this population. Purpose/Hypothesis: The purposes were to perform a meta-analysis to determine the prevalence of osteoarthritis after an ACL reconstruction, examining the effects of length of time after surgery, preoperative time interval from injury to surgery, and patient age at the time of surgery. It was hypothesized that the prevalence of PTOA increased with time from surgery and that increased time from injury to surgery and age were also risk factors for the development of PTOA.\n\n\nSTUDY DESIGN\nMeta-analysis.\n\n\nMETHODS\nA meta-analysis of the prevalence of radiographic PTOA after ACL reconstruction was performed of studies with a minimum of 5 years' follow-up, with a level of evidence of 1, 2, or 3. The presence of osteoarthritis was defined according to knee radiographs evaluated with classification based on Kellgren and Lawrence, Ahlbäck, International Knee Documentation Committee, or the Osteoarthritis Research Society International. 
Metaregression models quantified the relationship between radiographic PTOA prevalence and the mean time from injury to surgery, mean patient age at time of surgery, and mean postoperative follow-up time.\n\n\nRESULTS\nThirty-eight studies (4108 patients) were included. Longer postsurgical follow-up time was significantly positively associated with a higher proportion of PTOA development. The model-estimated proportion of PTOA (95% CI) at 5, 10, and 20 years after surgery was 11.3% (6.4%-19.1%), 20.6% (14.9%-27.7%), and 51.6% (29.1%-73.5%), respectively. Increased chronicity of the ACL tear before surgery and increased patient age were also associated with a higher likelihood of PTOA development.\n\n\nCONCLUSION\nThe prevalence of osteoarthritis after an ACL reconstruction significantly increased with time. Longer chronicity of ACL tear and older age at the time of surgery were significantly positively correlated with the development of osteoarthritis. A timely referral and treatment of symptomatic patients are vital to diminish the occurrence of PTOA.", "title": "" }, { "docid": "3cc0218ffbdb04ee37c20138c1b56f3f", "text": "Many kinds of communication networks, in particular social and opportunistic networks, rely at least partly on humans to help move data across the network. Human altruistic behavior is an important factor determining the feasibility of such a system. In this paper, we study the impact of different distributions of altruism on the throughput and delay of a mobile social communication system. We evaluate the system performance using four experimental human mobility traces with uniform and community-biased traffic patterns. We found that mobile social networks are very robust to the distributions of altruism due to the nature of multiple paths. We further confirm the results by simulations on two popular social network models. To the best of our knowledge, this is the first complete study of the impact of altruism on mobile social networks, including the impact of topologies and traffic patterns.", "title": "" }, { "docid": "3a53831731ec16edf54877c610ae4384", "text": "We propose a position-based approach for large-scale simulations of rigid bodies at interactive frame rates. Our method solves positional constraints between rigid bodies and therefore integrates nicely with other position-based methods. Interaction of particles and rigid bodies through common constraints enables two-way coupling with deformables. The method exhibits exceptional performance and stability while being user-controllable and easy to implement. Various results demonstrate the practicability of our method for the resolution of collisions, contacts, stacking and joint constraints.", "title": "" }, { "docid": "ffc09744f2668e52ce84ac28887fd5fe", "text": "As the number of research papers available on the Web has increased enormously over the years, paper recommender systems have been proposed to help researchers automatically find works of interest. The main problem with the current approaches is that they assume that recommending algorithms are provided with a rich set of evidence (e.g., document collections, citations, profiles) which is normally not widely available. In this paper we propose a novel source-independent framework for research paper recommendation. The framework requires as input only a single research paper and generates several potential queries by using terms in that paper, which are then submitted to existing Web information sources that hold research papers.
Once a set of candidate papers for recommendation is generated, the framework applies content-based recommending algorithms to rank the candidates in order to recommend the ones most related to the input paper. This is done by using only publicly available metadata (i.e., title and abstract). We evaluate our proposed framework by performing an extensive experimentation in which we analyzed several strategies for query generation and several ranking strategies for paper recommendation. Our results show that good recommendations can be obtained with simple and low cost strategies.", "title": "" }, { "docid": "29b0d0737493b50cbcec8c4cecc76f5b", "text": "The author first provides an overview of computational intelligence and AI in games. Then he describes the new IEEE Transactions, which will publish archival quality original papers in all aspects of computational intelligence and AI related to all types of games. To name some examples, these include computer and video games, board games, card games, mathematical games, games that model economies or societies, serious games with educational and training applications, and games involving physical objects such as robot football and robotic car racing. Emphasis will also be placed on the use of these methods to improve performance in, and understanding of, the dynamics of games, as well as gaining insight into the properties of the methods as applied to games. It will also include using games as a platform for building intelligent embedded agents for real-world applications. The journal builds on a scientific community that has already been active in recent years with the development of new conference series such as the IEEE Symposium on Computational Intelligence in Games (CIG) and Artificial Intelligence and Interactive Digital Entertainment (AIIDE), as well as special issues on games in journals such as the IEEE Transactions on Evolutionary Computation. When setting up the journal, a decision was made to include both artificial intelligence (AI) and computational intelligence (CI) in the title. AI seeks to simulate intelligent behavior in any way that can be programmed effectively. Some see the field of AI as being all-inclusive, while others argue that there is nothing artificial about real intelligence as exhibited by higher mammals.", "title": "" }, { "docid": "cfff07dbbc363a3e64b94648e19f2e4b", "text": "Nitrogen (N) starvation and excess have distinct effects on N uptake and metabolism in poplars, but the global transcriptomic changes underlying morphological and physiological acclimation to altered N availability are unknown. We found that N starvation stimulated the fine root length and surface area by 54 and 49%, respectively, decreased the net photosynthetic rate by 15% and reduced the concentrations of NH4+, NO3(-) and total free amino acids in the roots and leaves of Populus simonii Carr. in comparison with normal N supply, whereas N excess had the opposite effect in most cases. Global transcriptome analysis of roots and leaves elucidated the specific molecular responses to N starvation and excess. Under N starvation and excess, gene ontology (GO) terms related to ion transport and response to auxin stimulus were enriched in roots, whereas the GO term for response to abscisic acid stimulus was overrepresented in leaves. Common GO terms for all N treatments in roots and leaves were related to development, N metabolism, response to stress and hormone stimulus. 
Approximately 30-40% of the differentially expressed genes formed a transcriptomic regulatory network under each condition. These results suggest that global transcriptomic reprogramming plays a key role in the morphological and physiological acclimation of poplar roots and leaves to N starvation and excess.", "title": "" }, { "docid": "8e1c820f4981b5ef8b8ec25be25d2ecc", "text": "As one of the most basic photo manipulation processes, photo cropping is widely used in the printing, graphic design, and photography industries. In this paper, we introduce graphlets (i.e., small connected subgraphs) to represent a photo's aesthetic features, and propose a probabilistic model to transfer aesthetic features from the training photo onto the cropped photo. In particular, by segmenting each photo into a set of regions, we construct a region adjacency graph (RAG) to represent the global aesthetic feature of each photo. Graphlets are then extracted from the RAGs, and these graphlets capture the local aesthetic features of the photos. Finally, we cast photo cropping as a candidate-searching procedure on the basis of a probabilistic model, and infer the parameters of the cropped photos using Gibbs sampling. The proposed method is fully automatic. Subjective evaluations have shown that it is preferred over a number of existing approaches.", "title": "" }, { "docid": "b54ca99ae8818517d5c04100bad0f3b4", "text": "Finding the sparsest solutions to a tensor complementarity problem is generally NP-hard due to the nonconvexity and noncontinuity of the involved ℓ0 norm. In this paper, a special type of tensor complementarity problems with Z-tensors is considered. Under some mild conditions, we show that pursuing the sparsest solutions is equivalent to solving polynomial programming with a linear objective function. The involved conditions guarantee the desired exact relaxation and also allow one to achieve a global optimal solution to the relaxed nonconvex polynomial programming problem. In particular, in comparison to existing exact relaxation conditions, such as RIP-type ones, our proposed conditions are easy to verify. This research was supported by the National Natural Science Foundation of China (11301022, 11431002), the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University (RCS2014ZT20, RCS2014ZZ01), and the Hong Kong Research Grant Council (Grant No. PolyU 502111, 501212, 501913 and 15302114).", "title": "" }, { "docid": "9d803b0ce1f1af621466b1d7f97b7edf", "text": "This research paper addresses the methodology and approaches to managing criminal computer forensic investigations in a law enforcement environment with management controls, operational controls, and technical controls. Management controls cover policy and standard operating procedures (SOP's), methodology, and guidance. Operational controls cover SOP requirements, seizing evidence, evidence handling, best practices, and education, training and awareness.
Technical controls cover acquisition and analysis procedures, data integrity, rules of evidence, presenting findings, proficiency testing, and data archiving.", "title": "" }, { "docid": "333e2df79425177f0ce2686bd5edbfbe", "text": "The current paper proposes a novel variational Bayes predictive coding RNN model, which can learn to generate fluctuated temporal patterns from exemplars. The model learns to maximize the lower bound of the weighted sum of the regularization and reconstruction error terms. We examined how this weighting can affect the development of different types of information processing while learning fluctuated temporal patterns. Simulation results show that strong weighting of the reconstruction term causes the development of deterministic chaos for imitating the randomness observed in target sequences, while strong weighting of the regularization term causes the development of stochastic dynamics imitating probabilistic processes observed in targets. Moreover, results indicate that the most generalized learning emerges between these two extremes. The paper concludes with implications in terms of the underlying neuronal mechanisms for autism spectrum disorder and for free action.", "title": "" }, { "docid": "67f716403b420fcd14c057dcf3be97e3", "text": "In this paper, the answer selection problem in community question answering (CQA) is regarded as an answer sequence labeling task, and a novel approach is proposed based on the recurrent architecture for this problem. Our approach applies convolutional neural networks (CNNs) to learning the joint representation of each question-answer pair first, and then uses the joint representation as input of the long short-term memory (LSTM) to learn the answer sequence of a question for labeling the matching quality of each answer. Experiments conducted on the SemEval 2015 CQA dataset show the effectiveness of our approach.", "title": "" }, { "docid": "f4009fde2b4ac644d3b83b664e178b5f", "text": "This chapter describes the history of metaheuristics in five distinct periods, starting long before the first use of the term and ending a long time in the future.", "title": "" } ]
scidocsrr
6d361e9a6a69488d5b91836e8447a3e7
Surgical Simulation Training Systems : Box Trainers , Virtual Reality and Augmented Reality Simulators
[ { "docid": "b60e8a6f417d70499c7a6a251406c280", "text": "Details are presented of a low cost augmented-reality system for the simulation of ultrasound guided needle insertion procedures (tissue biopsy, abscess drainage, nephrostomy etc.) for interventional radiology education and training. The system comprises physical elements; a mannequin, a mock ultrasound probe and a needle, and software elements; generating virtual ultrasound anatomy and allowing data collection. These two elements are linked by a pair of magnetic 3D position sensors. Virtual anatomic images are generated based on anatomic data derived from full body CT scans of live humans. Details of the novel aspects of this system are presented including; image generation, registration and calibration.", "title": "" } ]
[ { "docid": "4df7b0aa29f11ba58eeb7265c195b75b", "text": "Cheilitis glandularis is an uncommon inflammatory salivary gland disorder affecting the lower lip of adults, with various etiological and predisposing factors. Based on clinical and histopathological findings, three types of cheilitis glandularis have been described in the literature (i.e., simple, superficial suppurative, and deep suppurative). The simple type is the most common, but malignant changes have been observed in the deep suppurative type. We report a case of cheilitis glandularis affecting the lower lip of a 20-year-old female, which is rare for that age and sex.", "title": "" }, { "docid": "0186c053103d06a8ddd054c3c05c021b", "text": "The brain-gut axis is a bidirectional communication system between the central nervous system and the gastrointestinal tract. Serotonin functions as a key neurotransmitter at both terminals of this network. Accumulating evidence points to a critical role for the gut microbiome in regulating normal functioning of this axis. In particular, it is becoming clear that the microbial influence on tryptophan metabolism and the serotonergic system may be an important node in such regulation. There is also substantial overlap between behaviours influenced by the gut microbiota and those which rely on intact serotonergic neurotransmission. The developing serotonergic system may be vulnerable to differential microbial colonisation patterns prior to the emergence of a stable adult-like gut microbiota. At the other extreme of life, the decreased diversity and stability of the gut microbiota may dictate serotonin-related health problems in the elderly. The mechanisms underpinning this crosstalk require further elaboration but may be related to the ability of the gut microbiota to control host tryptophan metabolism along the kynurenine pathway, thereby simultaneously reducing the fraction available for serotonin synthesis and increasing the production of neuroactive metabolites. The enzymes of this pathway are immune and stress-responsive, both systems which buttress the brain-gut axis. In addition, there are neural processes in the gastrointestinal tract which can be influenced by local alterations in serotonin concentrations with subsequent relay of signals along the scaffolding of the brain-gut axis to influence CNS neurotransmission. Therapeutic targeting of the gut microbiota might be a viable treatment strategy for serotonin-related brain-gut axis disorders.", "title": "" }, { "docid": "ef5c44f6895178c8727272dbb74b5df2", "text": "We present a systematic analysis of existing multi-domain learning approaches with respect to two questions. First, many multi-domain learning algorithms resemble ensemble learning algorithms. (1) Are multi-domain learning improvements the result of ensemble learning effects? Second, these algorithms are traditionally evaluated in a balanced class label setting, although in practice many multi-domain settings have domain-specific class label biases. When multi-domain learning is applied to these settings, (2) are multi-domain methods improving because they capture domain-specific class biases?
An understanding of these two issues presents a clearer idea about where the field has had success in multi-domain learning, and it suggests some important open questions for improving beyond the current state of the art.", "title": "" }, { "docid": "7280754ec81098fe38023efcb25871ba", "text": "In this paper, we present a complete framework to inverse render faces with a 3D Morphable Model (3DMM). By decomposing the image formation process into geometric and photometric parts, we are able to state the problem as a multilinear system which can be solved accurately and efficiently. As we treat each contribution as independent, the objective function is convex in the parameters and a global solution is guaranteed. We start by recovering 3D shape using a novel algorithm which incorporates generalization error of the model obtained from empirical measurements. We then describe two methods to recover facial texture, diffuse lighting, specular reflectance, and camera properties from a single image. The methods make increasingly weak assumptions and can be solved in a linear fashion. We evaluate our findings on a publicly available database, where we are able to outperform an existing state-of-the-art algorithm. We demonstrate the usability of the recovered parameters in a recognition experiment conducted on the CMU-PIE database.", "title": "" }, { "docid": "fd108d142963d10968904708555efc9d", "text": "The Gaussian filter has been used extensively in image processing and computer vision for many years. In this survey paper, we discuss the various features of this operator that make it the filter of choice in the area of edge detection. Despite these desirable features of the Gaussian filter, edge detection algorithms which use it suffer from many problems. We will review several linear and nonlinear Gaussian-based edge detection methods.", "title": "" }, { "docid": "3f988178611f2d6f13d6fd72febf1542", "text": "In today’s information-based society, there is abundant knowledge out there carried in the form of natural language texts (e.g., news articles, social media posts, scientific publications), which spans across various domains (e.g., corporate documents, advertisements, legal acts, medical reports), and grows at an astonishing rate. How to turn such massive and unstructured text data into structured, actionable knowledge for computational machines, and furthermore, how to teach machines learn to reason and complete the extracted knowledge is a grand challenge to the research community. Traditional IE systems assume abundant human annotations for training high quality machine learning models, which is impractical when trying to deploy IE systems to a broad range of domains, settings and languages. In the first part of the tutorial, we introduce how to extract structured facts (i.e., entities and their relations of different types) from text corpora to construct knowledge bases, with a focus on methods that are minimally-supervised and domain-independent for timely knowledge base construction across various application domains. In the second part, we introduce how to leverage other knowledge, such as the distributional statistics of characters and words, the annotations for other tasks and other domains, and the linguistics and problem structures, to combat the problem of inadequate supervision, and conduct low-resource information extraction. In the third part, we describe recent advances in knowledge base reasoning. 
We start with a gentle introduction to the literature, focusing on path-based and embedding-based methods. We then describe DeepPath, a recent attempt to use deep reinforcement learning to combine the best of both worlds for knowledge base reasoning.", "title": "" }, { "docid": "15c715c3da3883e363aa8e442e903269", "text": "A supervised learning rule for Spiking Neural Networks (SNNs) is presented that can cope with neurons that spike multiple times. The rule is developed by extending the existing SpikeProp algorithm, which could only be used for one spike per neuron. The problem caused by the discontinuity in the spike process is counteracted with a simple but effective rule, which makes the learning process more efficient. Our learning rule is successfully tested on a classification task of Poisson spike trains. We also applied the algorithm to a temporal version of the XOR problem and show that it is possible to learn this classical problem using only one spiking neuron making use of a hair-trigger situation.", "title": "" }, { "docid": "f59096137378d49c81bcb1de0be832b2", "text": "The fast Fourier transform (FFT) plays a crucial role in the efficient implementation and analysis of digital signal processing systems. It is also applicable to image processing, where the analysis proceeds pixel-wise. Its applications include signal analysis, sound filtering, data compression, and the solution of partial differential equations. The FFT provides an efficient realization of the discrete Fourier transform (DFT); in the present work, the DFT is implemented using the decimation-in-time approach. Experiments conducted with the present method on a large number of datasets under different conditions show an improvement in the performance and the overall output of the system.", "title": "" }, { "docid": "7b5797c3cc861f02467684ed72201a4b", "text": "Interspecies blastocyst complementation enables organ-specific enrichment of xenogenic pluripotent stem cell (PSC) derivatives.
Here, we establish a versatile blastocyst complementation platform based on CRISPR-Cas9-mediated zygote genome editing and show enrichment of rat PSC-derivatives in several tissues of gene-edited organogenesis-disabled mice. Besides gaining insights into species evolution, embryogenesis, and human disease, interspecies blastocyst complementation might allow human organ generation in animals whose organ size, anatomy, and physiology are closer to humans. To date, however, whether human PSCs (hPSCs) can contribute to chimera formation in non-rodent species remains unknown. We systematically evaluate the chimeric competency of several types of hPSCs using a more diversified clade of mammals, the ungulates. We find that naïve hPSCs robustly engraft in both pig and cattle pre-implantation blastocysts but show limited contribution to post-implantation pig embryos. Instead, an intermediate hPSC type exhibits higher degree of chimerism and is able to generate differentiated progenies in post-implantation pig embryos.", "title": "" }, { "docid": "6aebae4d8ed0af23a38a945b85c3b6ff", "text": "Modern web applications are conglomerations of JavaScript written by multiple authors: application developers routinely incorporate code from third-party libraries, and mashup applications synthesize data and code hosted at different sites. In current browsers, a web application’s developer and user must trust third-party code in libraries not to leak the user’s sensitive information from within applications. Even worse, in the status quo, the only way to implement some mashups is for the user to give her login credentials for one site to the operator of another site. Fundamentally, today’s browser security model trades privacy for flexibility because it lacks a sufficient mechanism for confining untrusted code. We present COWL, a robust JavaScript confinement system for modern web browsers. COWL introduces label-based mandatory access control to browsing contexts in a way that is fully backwardcompatible with legacy web content. We use a series of case-study applications to motivate COWL’s design and demonstrate how COWL allows both the inclusion of untrusted scripts in applications and the building of mashups that combine sensitive information from multiple mutually distrusting origins, all while protecting users’ privacy. Measurements of two COWL implementations, one in Firefox and one in Chromium, demonstrate a virtually imperceptible increase in page-load latency.", "title": "" }, { "docid": "1cee66a4630522e2128ca6b0cd2b87e4", "text": "This paper gives a general definition of a “kind of schema” (often called a “meta-model” in the literature, but here called a “species”) along with general definitions for the schemas of a species, and for the databases, constraints, and queries over a given schema of a species. This leads naturally to a general theory of data translation and integration over arbitrary schemas of arbitrary species, based on schema morphisms, and to a similar general theory of ontology translation and integration over arbitrary logics. Institutions provide a general notion of logic, and Grothendieck flattening provides a general tool for integrating heterogeneous schemas, species and logics, as well as theories, such as ontologies, over different logics. Many examples of our novel concepts are included, some rather detailed. An initial section introduces data integration and ontologies for readers who are not specialists, with some emphasis on challenges. 
A brief review of universal algebra is also given, though some familiarity with category theory is assumed in later sections.", "title": "" }, { "docid": "3b5dcd12c1074100ffede33c8b3a680c", "text": "This paper proposes a two-stream flow-guided convolutional attention network for action recognition in videos. The central idea is that optical flows, when properly compensated for the camera motion, can be used to guide attention to the human foreground. We thus develop crosslink layers from the temporal network (trained on flows) to the spatial network (trained on RGB frames). These crosslink layers guide the spatial stream to pay more attention to the human foreground areas and be less affected by background clutter. We obtain promising performance with our approach on the UCF101, HMDB51 and Hollywood2 datasets.", "title": "" }, { "docid": "737814d99e2c3ef09a0f17bf143a40df", "text": "This paper presents a cyclic redundancy check (CRC) based encoding scheme. High-throughput and high-speed hardware for a Golay code encoder and decoder could be useful in digital communication systems. In this paper, a new algorithm is proposed for the CRC-based encoding scheme, which is devoid of any linear feedback shift registers (LFSR). In addition, efficient architectures are proposed for both the Golay encoder and decoder, which outperform the existing architectures in terms of speed and throughput. The proposed architecture is implemented and evaluated with the Xilinx Virtex-4 power estimator. The CRC-based encoder and decoder are intuitive and easy to implement, and they reduce the huge hardware complexity otherwise required, improving the transmission system performance. Our work is to design a Golay code encoder and decoder architecture using the CRC generation technique, which reduces the circuit complexity of the data transmission and reception process.", "title": "" }, { "docid": "6586fc02e554e58ee1d5a58ef90cc197", "text": "OBJECTIVES\nRecent studies have started to explore the implementation of brain-computer interfaces (BCI) as part of driving assistant systems. The current study presents an EEG-based BCI that decodes error-related brain activity. Such information can be used, e.g., to predict a driver's intended turning direction before reaching road intersections.\n\n\nAPPROACH\nWe executed experiments in a car simulator (N = 22) and a real car (N = 8). While the subject was driving, a directional cue was shown before reaching an intersection, and we classified the presence or absence of error-related potentials from the EEG to infer whether the cued direction coincided with the subject's intention. In this protocol, the directional cue can correspond to an estimation of the driving direction provided by a driving assistance system. We analyzed ERPs elicited during normal driving and evaluated the classification performance in both offline and online tests.\n\n\nRESULTS\nAn average classification accuracy of 0.698 ± 0.065 was obtained in offline experiments in the car simulator, while tests in the real car yielded a performance of 0.682 ± 0.059. The results were significantly higher than chance level for all cases. Online experiments led to equivalent performances in both simulated and real car driving experiments.
These results support the feasibility of decoding these signals to help estimate whether the driver's intention coincides with the advice provided by the driving assistant in a real car.\n\n\nSIGNIFICANCE\nThe study demonstrates a BCI system in real-world driving, extending the work from previous simulated studies. As far as we know, this is the first online study decoding a driver's error-related brain activity in a real car. Given the encouraging results, the paradigm could be further improved by using more sophisticated machine learning approaches and possibly be combined with applications in intelligent vehicles.", "title": "" }, { "docid": "fe94febc520eab11318b49391d46476b", "text": "BACKGROUND\nDiabetes is a chronic disease, with high prevalence across many nations, which is characterized by elevated blood glucose levels and a risk of acute and chronic complications. The Kingdom of Saudi Arabia (KSA) has one of the highest levels of diabetes prevalence globally. It is well known that the treatment of diabetes is a complex process and requires both lifestyle change and a clear pharmacologic treatment plan. To avoid complications from diabetes, effective behavioural change together with extensive education and self-management is one of the key approaches to alleviate such complications. However, this process is lengthy and expensive. Recent studies on the use of smartphone technologies for diabetes self-management have shown them to be effective tools for controlling hemoglobin (HbA1c) levels, especially in type-2 diabetic (T2D) patients. However, to date no reported study has addressed the effectiveness of this approach in Saudi patients. This study investigates the impact of using mobile health technologies for the self-management of diabetes in Saudi Arabia.\n\n\nMETHODS\nIn this study, an intelligent mobile diabetes management system (SAED), tailored for T2D patients in KSA, was developed. A pilot study of the SAED system was conducted in Saudi Arabia with 20 diabetic patients for a duration of 6 months. The patients were randomly categorized into a control group who did not use the SAED system and an intervention group who used the SAED system for their diabetes management during this period. At the end of the follow-up period, the HbA1c levels of the patients in both groups were measured, and a diabetes knowledge test was also conducted to assess the diabetes awareness of the patients.\n\n\nRESULTS\nThe results of the SAED pilot study showed that the patients in the intervention group were able to significantly decrease their HbA1c levels compared to the control group. The SAED system also enhanced the diabetes awareness amongst the patients in the intervention group during the trial period. These outcomes confirm the global studies on the effectiveness of smartphone technologies in diabetes management. The significance of the study is that this was one of the first such studies conducted on Saudi patients and of their acceptance of such technology in their diabetes self-management treatment plans.\n\n\nCONCLUSIONS\nThe pilot study of the SAED system showed that a mobile health technology can significantly improve the HbA1c levels among Saudi diabetics and improve their disease management plans.
The SAED system can also be an effective and low-cost solution in improving the quality of life of diabetic patients in the Kingdom considering the high level of prevalence and the increasing economic burden of this disease.", "title": "" }, { "docid": "8858053a805375aba9d8e71acfd7b826", "text": "With the accelerating rate of globalization, business exchanges are carried out cross the border, as a result there is a growing demand for talents professional both in English and Business. We can see that at present Business English courses are offered by many language schools in the aim of meeting the need for Business English talent. Many researchers argue that no differences can be defined between Business English teaching and General English teaching. However, this paper concludes that Business English is different from General English at least in such aspects as in the role of teacher, in course design, in teaching models, etc., thus different teaching methods should be applied in order to realize expected teaching goals.", "title": "" }, { "docid": "1cac08a96e946fb6d98290aa8bb6c434", "text": "Accelerated in vitro release testing methodology has been developed as an indicator of product performance to be used as a discriminatory quality control (QC) technique for the release of clinical and commercial batches of biodegradable microspheres. While product performance of biodegradable microspheres can be verified by in vivo and/or in vitro experiments, such evaluation can be particularly challenging because of slow polymer degradation, resulting in extended study times, labor, and expense. Three batches of Leuprolide poly(lactic-co-glycolic acid) (PLGA) microspheres having varying morphology (process variants having different particle size and specific surface area) were manufactured by the solvent extraction/evaporation technique. Tests involving in vitro release, polymer degradation and hydration of the microspheres were performed on the three batches at 55°C. In vitro peptide release at 55°C was analyzed using a previously derived modification of the Weibull function termed the modified Weibull equation (MWE). Experimental observations and data analysis confirm excellent reproducibility studies within and between batches of the microsphere formulations demonstrating the predictability of the accelerated experiments at 55°C. The accelerated test method was also successfully able to distinguish the in vitro product performance between the three batches having varying morphology (process variants), indicating that it is a suitable QC tool to discriminate product or process variants in clinical or commercial batches of microspheres. Additionally, data analysis utilized the MWE to further quantify the differences obtained from the accelerated in vitro product performance test between process variants, thereby enhancing the discriminatory power of the accelerated methodology at 55°C.", "title": "" }, { "docid": "0dccd34fa0bfd4a9841610bf67b6ae81", "text": "Broadcast authentication is a fundamental security service in distributed sensor networks. This paper presents the development of a scalable broadcast authentication scheme named multi-level μTESLA based on μTESLA, a broadcast authentication protocol whose scalability is limited by its unicast-based initial parameter distribution. Multi-level μTESLA satisfies several nice properties, including low overhead, tolerance of message loss, scalability to large networks, and resistance to replay attacks as well as denial of service attacks. 
This paper also presents the experimental results obtained through simulation, which demonstrate the performance of the proposed scheme under severe denial of service attacks and poor channel quality.", "title": "" }, { "docid": "b5347e195b44d5ae6d4674c685398fa3", "text": "The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N ≤ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position and image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Prägnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory.", "title": "" }, { "docid": "6ddb475ef1529ab496ab9f40dc51cb99", "text": "While inexpensive depth sensors are becoming increasingly ubiquitous, field of view and self-occlusion constraints limit the information a single sensor can provide. For many applications one may instead require a network of depth sensors, registered to a common world frame and synchronized in time. Historically such a setup has required a tedious manual calibration procedure, making it infeasible to deploy these networks in the wild, where spatial and temporal drift are common. In this work, we propose an entirely unsupervised procedure for calibrating the relative pose and time offsets of a pair of depth sensors. So doing, we make no use of an explicit calibration target, or any intentional activity on the part of a user. Rather, we use the unstructured motion of objects in the scene to find potential correspondences between the sensor pair. This yields a rough transform which is then refined with an occlusion-aware energy minimization. We compare our results against the standard checkerboard technique, and provide qualitative examples for scenes in which such a technique would be impossible.", "title": "" } ]
scidocsrr
ec50e16eaaf87c047ad65adbe589012f
To buy or not to buy: mining airfare data to minimize ticket purchase price
[ { "docid": "38aeacd5d85523b494010debd69f4bac", "text": "We propose to train trading systems by optimizing financial objective functions via reinforcement learning. The performance functions that we consider as value functions are profit or wealth, the Sharpe ratio and our recently proposed differential Sharpe ratio for online learning. In Moody & Wu (1997), we presented empirical results in controlled experiments that demonstrated the advantages of reinforcement learning relative to supervised learning. Here we extend our previous work to compare Q-Learning to a reinforcement learning technique based on real-time recurrent learning (RTRL) that maximizes immediate reward. Our simulation results include a spectacular demonstration of the presence of predictability in the monthly Standard and Poor's 500 stock index for the 25-year period 1970 through 1994. Our reinforcement trader achieves a simulated out-of-sample profit of over 4000% for this period, compared to the return for a buy and hold strategy of about 1300% (with dividends reinvested). This superior result is achieved with substantially lower risk.", "title": "" } ]
[ { "docid": "36f960b37e7478d8ce9d41d61195f83a", "text": "An effective technique for locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives an explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, spherical-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than spherical-interpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamforming and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method.", "title": "" }, { "docid": "b835d745ea5d158b9de418a0c009dcdf", "text": "This paper presents a new grid-connected photovoltaic energy conversion system configuration for large-scale power plants. The grid-tied converter is based on a modular multilevel converter using voltage source H-bridge cells. The proposed converter is capable of concentrating a multimegawatt PV plant with distributed string MPPT capability, high power quality and increased efficiency compared to the classic two-level voltage source converters. The main challenge is to handle the inherent power unbalances which may occur, not only between the different cells of one phase of the converter, but also between the three phases. The control strategy to deal with these unbalances is analyzed in this paper. Simulation results for a downsized 7-level MMC composed of 18 H-bridge cells and PV strings are presented to validate the proposed topology and control method.", "title": "" }, { "docid": "b9148f25ba143660cf38035425443ee9", "text": "Humans tend to swing their arms when they walk, a curious behaviour since the arms play no obvious role in bipedal gait. It might be costly to use muscles to swing the arms, and it is unclear whether potential benefits elsewhere in the body would justify such costs. To examine these costs and benefits, we developed a passive dynamic walking model with free-swinging arms. Even with no torques driving the arms or legs, the model produced walking gaits with arm swinging similar to humans. Passive gaits with arm phasing opposite to normal were also found, but these induced a much greater reaction moment from the ground, which could require muscular effort in humans. We therefore hypothesized that the reduction of this moment may explain the physiological benefit of arm swinging. Experimental measurements of humans (n = 10) showed that normal arm swinging required minimal shoulder torque, while volitionally holding the arms still required 12 per cent more metabolic energy. Among measures of gait mechanics, vertical ground reaction moment was most affected by arm swinging and increased by 63 per cent without it. Walking with opposite-to-normal arm phasing required minimal shoulder effort but magnified the ground reaction moment, causing metabolic rate to increase by 26 per cent.
Passive dynamics appear to make arm swinging easy, while indirect benefits from reduced vertical moments make it worthwhile overall.", "title": "" }, { "docid": "a2e192b3b17b261e525ed7abc3543d26", "text": "A new version of a special-purpose processor for running lazy functional programs is presented. This processor – the Reduceron – exploits parallel memories and dynamic analyses to increase evaluation speed, and is implemented using reconfigurable hardware. Compared to a more conventional functional language implementation targeting a standard RISC processor running on the same reconfigurable hardware, the Reduceron offers a significant improvement in run-time performance.", "title": "" }, { "docid": "98788b45932c8564d29615f49407d179", "text": "BACKGROUND\nAbnormal forms of grief, currently referred to as complicated grief or prolonged grief disorder, have been discussed extensively in recent years. While the diagnostic criteria are still debated, there is no doubt that prolonged grief is disabling and may require treatment. To date, few interventions have demonstrated efficacy.\n\n\nMETHODS\nWe investigated whether outpatients suffering from prolonged grief disorder (PGD) benefit from a newly developed integrative cognitive behavioural therapy for prolonged grief (PG-CBT). A total of 51 patients were randomized into two groups, stratified by the type of death and their relationship to the deceased; 24 patients composed the treatment group and 27 patients composed the wait list control group (WG). Treatment consisted of 20-25 sessions. Main outcome was change in grief severity; secondary outcomes were reductions in general psychological distress and in comorbidity.\n\n\nRESULTS\nPatients on average had 2.5 comorbid diagnoses in addition to PGD. Between group effect sizes were large for the improvement of grief symptoms in treatment completers (Cohen׳s d=1.61) and in the intent-to-treat analysis (d=1.32). Comorbid depressive symptoms also improved in PG-CBT compared to WG. The completion rate was 79% in PG-CBT and 89% in WG.\n\n\nLIMITATIONS\nThe major limitations of this study were a small sample size and that PG-CBT took longer than the waiting time.\n\n\nCONCLUSIONS\nPG-CBT was found to be effective with an acceptable dropout rate. Given the number of bereaved people who suffer from PGD, the results are of high clinical relevance.", "title": "" }, { "docid": "07310c30b78d74a1e237af4dd949d68e", "text": "The vulnerability of face, fingerprint and iris recognition systems to attacks based on morphed biometric samples has been established in the recent past. However, so far a reliable detection of morphed biometric samples has remained an unsolved research challenge. In this work, we propose the first multi-algorithm fusion approach to detect morphed facial images. The FRGCv2 face database is used to create a set of 4,808 morphed and 2,210 bona fide face images which are divided into a training and test set. From a single cropped facial image features are extracted using four types of complementary feature extraction algorithms, including texture descriptors, keypoint extractors, gradient estimators and a deep learning-based method. By performing a score-level fusion of comparison scores obtained by four different types of feature extractors, a detection equal error rate (D-EER) of 2.8% is achieved. 
Compared to the best single algorithm approach achieving a D-EER of 5.5%, the D-EER of the proposed multi-algorithm fusion system is almost twice as low, confirming the soundness of the presented approach.", "title": "" }, { "docid": "47866c8eb518f962213e3a2d8c3ab8d3", "text": "With the increasing fears of the impacts of the high penetration rates of Photovoltaic (PV) systems, a technical study about their effects on the power quality metrics of the utility grid is required. Since such study requires a complete modeling of the PV system in an electromagnetic transient software environment, PSCAD was chosen. This paper investigates a grid-tied PV system that is prepared in PSCAD. The model consists of PV array, DC link capacitor, DC-DC buck converter, three phase six-pulse inverter, AC inductive filter, transformer and a utility grid equivalent model. The paper starts with investigating the tasks of the different blocks of the grid-tied PV system model. It also investigates the effect of variable atmospheric conditions (irradiation and temperature) on the performance of the different components in the model. DC-DC converter and inverter in this model use PWM and SPWM switching techniques, respectively. Finally, total harmonic distortion (THD) analysis on the inverter output current at PCC will be applied and the obtained THD values will be compared with the limits specified by the regulating standards such as IEEE Std 519-1992.", "title": "" }, { "docid": "7c47eaa26fb5d661c056cff84b485e99", "text": "The comparison of methods experiment is important part in process of analytical methods and instruments validation. Passing and Bablok regression analysis is a statistical procedure that allows valuable estimation of analytical methods agreement and possible systematic bias between them. It is robust, non-parametric, non sensitive to distribution of errors and data outliers. Assumptions for proper application of Passing and Bablok regression are continuously distributed data and linear relationship between data measured by two analytical methods. Results are presented with scatter diagram and regression line, and regression equation where intercept represents constant and slope proportional measurement error. Confidence intervals of 95% of intercept and slope explain if their value differ from value zero (intercept) and value one (slope) only by chance, allowing conclusion of method agreement and correction action if necessary. Residual plot revealed outliers and identify possible non-linearity. Furthermore, cumulative sum linearity test is performed to investigate possible significant deviation from linearity between two sets of data. Non linear samples are not suitable for concluding on method agreement.", "title": "" }, { "docid": "e59bd7353cdbd4f353e45990a2c24c63", "text": "We describe CACTI-IO, an extension to CACTI [4] that includes power, area and timing models for the IO and PHY of the off-chip memory interface for various server and mobile configurations. CACTI-IO enables design space exploration of the off-chip IO along with the DRAM and cache parameters.
We describe the models added and three case studies that use CACTI-IO to study the tradeoffs between memory capacity, bandwidth and power.\n The case studies show that CACTI-IO helps (i) provide IO power numbers that can be fed into a system simulator for accurate power calculations, (ii) optimize off-chip configurations including the bus width, number of ranks, memory data width and off-chip bus frequency, especially for novel buffer-based topologies, and (iii) enable architects to quickly explore new interconnect technologies, including 3-D interconnect. We find that buffers on board and 3-D technologies offer an attractive design space involving power, bandwidth and capacity when appropriate interconnect parameters are deployed.", "title": "" }, { "docid": "b2c03d8e54a2a6840f6688ab9682e24b", "text": "Path following and follow-the-leader motion is particularly desirable for minimally-invasive surgery in confined spaces which can only be reached using tortuous paths, e.g. through natural orifices. While path following and follow-the-leader motion can be achieved by hyper-redundant snake robots, their size is usually not applicable for medical applications. Continuum robots, such as tendon-driven or concentric tube mechanisms, fulfill the size requirements for minimally invasive surgery, but yet follow-the-leader motion is not inherently provided. In fact, parameters of the manipulator's section curvatures and translation have to be chosen wisely a priori. In this paper, we consider a tendon-driven continuum robot with extensible sections. After reformulating the forward kinematics model, we formulate prerequisites for follow-the-leader motion and present a general approach to determine a sequence of robot configurations to achieve follow-the-leader motion along a given 3D path. We evaluate our approach in a series of simulations with 3D paths composed of constant curvature arcs and general 3D paths described by B-spline curves. Our results show that mean path errors <0.4mm and mean tip errors <1.6mm can theoretically be achieved for constant curvature paths and <2mm and <3.1mm for general B-spline curves respectively.", "title": "" }, { "docid": "0b5f0cd5b8d49d57324a0199b4925490", "text": "Deep brain stimulation (DBS) has an increasing role in the treatment of idiopathic Parkinson's disease. Although, the subthalamic nucleus (STN) is the commonly chosen target, a number of groups have reported that the most effective contact lies dorsal/dorsomedial to the STN (region of the pallidofugal fibres and the rostral zona incerta) or at the junction between the dorsal border of the STN and the latter. We analysed our outcome data from Parkinson's disease patients treated with DBS between April 2002 and June 2004. During this period we moved our target from the STN to the region dorsomedial/medial to it and subsequently targeted the caudal part of the zona incerta nucleus (cZI). We present a comparison of the motor outcomes between these three groups of patients with optimal contacts within the STN (group 1), dorsomedial/medial to the STN (group 2) and in the cZI nucleus (group 3). Thirty-five patients with Parkinson's disease underwent MRI directed implantation of 64 DBS leads into the STN (17), dorsomedial/medial to STN (20) and cZI (27). The primary outcome measure was the contralateral Unified Parkinson's Disease Rating Scale (UPDRS) motor score (off medication/off stimulation versus off medication/on stimulation) measured at follow-up (median time 6 months).
The secondary outcome measures were the UPDRS III subscores of tremor, bradykinesia and rigidity. Dyskinesia score, L-dopa medication reduction and stimulation parameters were also recorded. The mean adjusted contralateral UPDRS III score with cZI stimulation was 3.1 (76% reduction) compared to 4.9 (61% reduction) in group 2 and 5.7 (55% reduction) in the STN (P-value for trend <0.001). There was a 93% improvement in tremor with cZI stimulation versus 86% in group 2 versus 61% in group 1 (P-value = 0.01). Adjusted 'off-on' rigidity scores were 1.0 for the cZI group (76% reduction), 2.0 for group 2 (52% reduction) and 2.1 for group 1 (50% reduction) (P-value for trend = 0.002). Bradykinesia was more markedly improved in the cZI group (65%) compared to group 2 (56%) or STN group (59%) (P-value for trend = 0.17). There were no statistically significant differences in the dyskinesia scores, L-dopa medication reduction and stimulation parameters between the three groups. Stimulation related complications were seen in some group 2 patients. High frequency stimulation of the cZI results in greater improvement in contralateral motor scores in Parkinson's disease patients than stimulation of the STN. We discuss the implications of this finding and the potential role played by the ZI in Parkinson's disease.", "title": "" }, { "docid": "080a14f6eb96b04c11c0cb65897dadd2", "text": "Enterococcus faecalis is a microorganism commonly detected in asymptomatic, persistent endodontic infections. Its prevalence in such infections ranges from 24% to 77%. This finding can be explained by various survival and virulence factors possessed by E. faecalis, including its ability to compete with other microorganisms, invade dentinal tubules, and resist nutritional deprivation. Use of good aseptic technique, increased apical preparation sizes, and inclusion of 2% chlorhexidine in combination with sodium hypochlorite are currently the most effective methods to combat E. faecalis within the root canal systems of teeth. In the changing face of dental care, continued research on E. faecalis and its elimination from the dental apparatus may well define the future of the endodontic specialty.", "title": "" }, { "docid": "56e5ba4f289816ab9ebdea2c71375258", "text": "This paper proposes a new scheme for fault detection and isolation (FDI) in variable speed wind turbine. The proposed scheme is based on an intelligent data-driven fault detection scheme using the extreme learning machine approach (ELM). The ELM is a kind of single hidden layer feed-forward neural network (SLFNN) with a fast learning. The basic idea is the use of a certain number n of ELM classifiers to deals with n types of faults affecting the wind turbine. Different parts of the process were investigated including actuators and sensors faults. The effectiveness of the proposed approach is illustrated through simulation.", "title": "" }, { "docid": "1eef21abdf14dc430b333cac71d4fe07", "text": "The authors have developed an adaptive matched filtering algorithm based upon an artificial neural network (ANN) for QRS detection. They use an ANN adaptive whitening filter to model the lower frequencies of the electrocardiogram (ECG) which are inherently nonlinear and nonstationary. The residual signal which contains mostly higher frequency QRS complex energy is then passed through a linear matched filter to detect the location of the QRS complex. 
The authors developed an algorithm to adaptively update the matched filter template from the detected QRS complex in the ECG signal itself so that the template can be customized to an individual subject. This ANN whitening filter is very effective at removing the time-varying, nonlinear noise characteristic of ECG signals. The detection rate for a very noisy patient record in the MIT/BIH arrhythmia database is 99.5% with this approach, which compares favorably to the 97.5% obtained using a linear adaptive whitening filter and the 96.5% achieved with a bandpass filtering method.", "title": "" }, { "docid": "b8e921733ef4ab77abcb48b0a1f04dbb", "text": "This paper demonstrates the efficiency of kinematic redundancy used to increase the useable workspace of planar parallel mechanisms. As examples, we propose kinematically redundant schemes of the well known planar 3RRR and 3RPR mechanisms denoted as 3(P)RRR and 3(P)RPR. In both cases, a prismatic actuator is added allowing a usually fixed base joint to move linearly. Hence, reconfigurations can be performed selectively in order to avoid singularities and to affect the mechanisms' performance directly. Using an interval-based method the useable workspace, i.e. the singularity-free workspace guaranteeing a desired performance, is obtained. Due to the interval analysis any uncertainties can be implemented within the algorithm leading to practical and realistic results. It is shown that due to the additional prismatic actuator the useable workspace increases significantly. Several analysis examples clarify the efficiency of the proposed kinematically redundant mechanisms.", "title": "" }, { "docid": "ccf105c61316ec4964955f2553bdba9f", "text": "Mobile-cloud offloading mechanisms delegate heavy mobile computation to the cloud. In real life use, the energy tradeoff of computing the task locally or sending the input data and the code of the task to the cloud is often negative, especially with popular communication intensive jobs like social-networking, gaming, and emailing. We design and build a working implementation of CDroid, a system that tightly couples the device OS to its cloud counterpart. The cloud-side handles data traffic through the device efficiently and, at the same time, caches code and data optimally for possible future offloading. In our system, when offloading decision takes place, input and code are likely to be already on the cloud. CDroid makes mobile cloud offloading more practical enabling offloading of lightweight jobs and communication intensive apps. Our experiments with real users in everyday life show excellent results in terms of energy savings and user experience.", "title": "" }, { "docid": "562b8653722f9b2cce55a400ad415286", "text": "India is a land of different weather conditions and versatile soils. Every year Indian farmers are facing the problem of sudden rain in their areas without any correct weather forecast which leads to damage of the already grown crops. The second major problem pertaining to Indian farmers is the lack of sufficient knowledge about their soil. The soil forecasting of how the soil structure is changing day by day due to different weather condition and other external factors, and which crop will be optimally suited to be grown in such soil are some of the problems common to the farmers. This paper makes an attempt as an assessment in proposing the solution and at the same time develops a prototype of a device using IoT for the use of the farmers on Indian agricultural land.
The solution proposed will have a centralized data server to analyze the data and report to the farmer the precautionary steps to be taken in advance for the safety of the crops. The solution proposed will have eco-friendly energy management through the solar plant and wind energy which make the IoT device more portable and at the same time makes implementable in any rural areas of India.", "title": "" }, { "docid": "6bd3614d830cbef03c9567bf096e417a", "text": "Rehabilitation robots start to become an important tool in stroke rehabilitation. Compared to manual arm training, robot-supported training can be more intensive, of longer duration, repetitive and task-oriented. Therefore, these devices have the potential to improve the rehabilitation process in stroke patients. While in the past, most groups have been working with endeffector-based robots, exoskeleton robots become more and more important, mainly because they offer a better guidance of the single human joints, especially during movements with large ranges. Regarding the upper extremities, the shoulder is the most complex human joint and its actuation is, therefore, challenging. This paper deals with shoulder actuation principles for exoskeleton robots. First, a quantitative analysis of the human shoulder movement is presented. Based on that analysis two shoulder actuation principles that provide motion of the center of the glenohumeral joint are presented and evaluated.", "title": "" }, { "docid": "8822138c493df786296c02315bea5802", "text": "Photodefinable Polyimides (PI) and polybenz-oxazoles (PBO) which have been widely used for various electronic applications such as buffer coating, interlayer dielectric and protection layer usually need high temperature cure condition over 300 °C to complete the cyclization and achieve good film properties. In addition, PI and PBO are also utilized recently for re-distribution layer of wafer level package. In this application, lower temperature curability is strongly required in order to prevent the thermal damage of the semi-conductor device and the other packaging material. Then, to meet this requirement, we focused on pre-cyclized polyimide with phenolic hydroxyl groups since this polymer showed the good solubility to aqueous TMAH and there was no need to apply high temperature cure condition. As a result of our study, the positive-tone photodefinable material could be obtained by using DNQ and combination of epoxy cross-linker enabled to enhance the chemical and PCT resistance of the cured film made even at 170 °C. Furthermore, the adhesion to copper was improved probably due to secondary hydroxyl groups which were generated from reacted epoxide groups. In this report, we introduce our concept of novel photodefinable positive-tone polyimide for low temperature cure.", "title": "" }, { "docid": "66a0c31ee0722ad9fc67bad142de1fb0", "text": "One of the key challenges facing wireless sensor networks (WSNs) is extending network lifetime due to sensor nodes having limited power supplies. Extending WSN lifetime is complicated because nodes often experience differential power consumption. For example, nodes closer to the sink in a given routing topology transmit more data and thus consume power more rapidly than nodes farther from the sink. 
Inspired by the huddling behavior of emperor penguins where the penguins take turns on the cold extremities of a penguin “huddle”, we propose mobile node rotation, a new method for using low-cost mobile sensor nodes to address differential power consumption and extend WSN lifetime. Specifically, we propose to rotate the nodes through the high power consumption locations. We propose efficient algorithms for single and multiple rounds of rotations. Our extensive simulations show that mobile node rotation can extend WSN topology lifetime by more than eight times on average which is significantly better than existing alternatives.", "title": "" } ]
scidocsrr
6d2253859b398d9c00a370a10562dc77
Exploring factors that influence Muslim intention to purchase online
[ { "docid": "66ad4513ed36329c299792ce35b2b299", "text": "Reducing social uncertainty—understanding, predicting, and controlling the behavior of other people—is a central motivating force of human behavior. When rules and customs are not su4cient, people rely on trust and familiarity as primary mechanisms to reduce social uncertainty. The relative paucity of regulations and customs on the Internet makes consumer familiarity and trust especially important in the case of e-Commerce. Yet the lack of an interpersonal exchange and the one-time nature of the typical business transaction on the Internet make this kind of consumer trust unique, because trust relates to other people and is nourished through interactions with them. This study validates a four-dimensional scale of trust in the context of e-Products and revalidates it in the context of e-Services. The study then shows the in:uence of social presence on these dimensions of this trust, especially benevolence, and its ultimate contribution to online purchase intentions. ? 2004 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "e9e7cb42ed686ace9e9785fafd3c72f8", "text": "We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).", "title": "" }, { "docid": "1ce49c421d0a5594ce1c439544500243", "text": "The use of digital games in education is growing. Digital games with their elements of ‘play’ and ‘challenge’ are increasingly viewed as a successful medium for engaging and motivating students, in situations where students may be uninterested or distant. One such situation is mathematics education in Nigeria where young people in schools can be unenthusiastic about the subject. The introduction of digital educational games is being trialed to see if it can address this issue. A key element for ensuring the success of the introduction of new technologies is that the users are prepared and ready to accept the technology. This also applies to the introduction of digital educational games in the classroom. Technology Acceptance Models (TAMs) have been widely employed to explore users' attitudes to technology and to highlight their main concerns and issues. The aim of this study is to investigate if a modified TAM can be successfully developed and deployed to explore teachers' attitudes to the introduction of digital educational games in their classroom. The study employs a mixed methods approach and combines the outcomes from previous research studies with data gathered from interviews with teachers to develop the modified TAM. This approach of combining the results from previous studies together with interviews from the targeted group enabled the key variables/constructs to be identified. Independent evaluation by a group of experts gave further confidence in the model. The results have shown that this modified TAM is a useful instrument for exploring the attitude of teachers to using digital games for learning and teaching, and highlighting the key areas which require support and input to ensure teachers are ready to accept and use this technology in their classroom practice.", "title": "" }, { "docid": "6a1411e0ae6477ad2280dcf941a9fa93", "text": "Measurement of human urinary carcinogen metabolites is a practical approach for obtaining important information about tobacco and cancer. This review presents currently available methods and evaluates their utility. Carcinogens and their metabolites and related compounds that have been quantified in the urine of smokers or non-smokers exposed to environmental tobacco smoke (ETS) include trans,trans-muconic acid (tt-MA) and S-phenylmercapturic acid (metabolites of benzene), 1- and 2-naphthol, hydroxyphenanthrenes and phenanthrene dihydrodiols, 1-hydroxypyrene (1-HOP), metabolites of benzo[a]pyrene, aromatic amines and heterocyclic aromatic amines, N-nitrosoproline, 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol and its glucuronides (NNAL and NNAL-Gluc), 8-oxodeoxyguanosine, thioethers, mercapturic acids, and alkyladenines. Nitrosamines and their metabolites have also been quantified in the urine of smokeless tobacco users. 
The utility of these assays to provide information about carcinogen dose, delineation of exposed vs. non-exposed individuals, and carcinogen metabolism in humans is discussed. NNAL and NNAL-Gluc are exceptionally useful biomarkers because they are derived from a carcinogen, 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), that is specific to tobacco products. The NNAL assay has high sensitivity and specificity, which are particularly important for studies on ETS exposure. Other useful assays that have been widely applied involve quantitation of 1-HOP and tt-MA. Urinary carcinogen metabolite biomarkers will be critical components of future studies on tobacco and human cancer, particularly with respect to new tobacco products and strategies for harm reduction, the role of metabolic polymorphisms in cancer, and further evaluation of human carcinogen exposure from ETS.", "title": "" }, { "docid": "c7993af6bf01f8b35f5494e5a564d757", "text": "Microservice Architectures (MA) have the potential to increase the agility of software development. In an era where businesses require software applications to evolve to support emerging software requirements, particularly for Internet of Things (IoT) applications, we examine the issue of microservice granularity and explore its effect upon application latency. Two approaches to microservice deployment are simulated; the first with microservices in a single container, and the second with microservices partitioned across separate containers. We observed a negligible increase in service latency for the multiple container deployment over a single container.", "title": "" }, { "docid": "d3ce4e666ce658228be23c5a26b87527", "text": "Deep Neural Networks (DNNs) have emerged as a powerful and versatile set of techniques to address challenging artificial intelligence (AI) problems. Applications in domains such as image/video processing, natural language processing, speech synthesis and recognition, genomics and many others have embraced deep learning as the foundational technique. DNNs achieve superior accuracy for these applications using very large models which require 100s of MBs of data storage, ExaOps of computation and high bandwidth for data movement. Despite advances in computing systems, training state-of-the-art DNNs on large datasets takes several days/weeks, directly limiting the pace of innovation and adoption. In this paper, we discuss how these challenges can be addressed via approximate computing. Based on our earlier studies demonstrating that DNNs are resilient to numerical errors from approximate computing, we present techniques to reduce communication overhead of distributed deep learning training via adaptive residual gradient compression (AdaComp), and computation cost for deep learning inference via Parameterized clipping ACTivation (PACT) based network quantization. Experimental evaluation demonstrates order of magnitude savings in communication overhead for training and computational cost for inference while not compromising application accuracy.", "title": "" }, { "docid": "24b8df8f9402c37e685bd4c3156e3464", "text": "We quantify the dynamical implications of the small-world phenomenon by considering the generic synchronization of oscillator networks of arbitrary topology. The linear stability of the synchronous state is linked to an algebraic condition of the Laplacian matrix of the network. Through numerics and analysis, we show how the addition of random shortcuts translates into improved network synchronizability.
Applied to networks of low redundancy, the small-world route produces synchronizability more efficiently than standard deterministic graphs, purely random graphs, and ideal constructive schemes. However, the small-world property does not guarantee synchronizability: the synchronization threshold lies within the boundaries, but linked to the end of the small-world region.", "title": "" }, { "docid": "a23fd89da025d456f9fe3e8a47595c6a", "text": "Mobile devices are especially vulnerable nowadays to malware attacks, thanks to the current trend of increased app downloads. Despite the significant security and privacy concerns it received, effective malware detection (MD) remains a significant challenge. This paper tackles this challenge by introducing a streaminglized machine learning-based MD framework, StormDroid: (i) The core of StormDroid is based on machine learning, enhanced with a novel combination of contributed features that we observed over a fairly large collection of data set; and (ii) we streaminglize the whole MD process to support large-scale analysis, yielding an efficient and scalable MD technique that observes app behaviors statically and dynamically. Evaluated on roughly 8,000 applications, our combination of contributed features improves MD accuracy by almost 10% compared with state-of-the-art antivirus systems; in parallel our streaminglized process, StormDroid, further improves efficiency rate by approximately three times than a single thread.", "title": "" }, { "docid": "41f386c9cab08ce2d265ba6522b5c5d5", "text": "Fascioliasis is a zoonosis actually considered as a foodborne trematode disease priority by the World Health Organization. Our study presents three cases of F. hepatica infection diagnosed by direct, indirect and/or imaging diagnostic techniques, showing the need of the combined use of them. In order to overcome some difficulties of the presently available methods we show for the first time the application of molecular tools to improve human fascioliasis diagnosis by employing a PCR protocol based on a repetitive element as target sequence. In conclusion, diagnosis of human fascioliasis has to be carried out by the combination of diagnostic techniques that allow the detection of infection in different disease phases, different epidemiological situations and known/new transmission patterns in the actual scenario.", "title": "" }, { "docid": "68ab3b742b2181a6d2e12ccc9ee46612", "text": "BACKGROUND\nLeadership is important in the implementation of innovation in business, health, and allied health care settings. Yet there is a need for empirically validated organizational interventions for coordinated leadership and organizational development strategies to facilitate effective evidence-based practice (EBP) implementation. This paper describes the initial feasibility, acceptability, and perceived utility of the Leadership and Organizational Change for Implementation (LOCI) intervention. A transdisciplinary team of investigators and community stakeholders worked together to develop and test a leadership and organizational strategy to promote effective leadership for implementing EBPs.\n\n\nMETHODS\nParticipants were 12 mental health service team leaders and their staff (n = 100) from three different agencies that provide mental health services to children and families in California, USA. Supervisors were randomly assigned to the 6-month LOCI intervention or to a two-session leadership webinar control condition provided by a well-known leadership training organization. 
We utilized mixed methods with quantitative surveys and qualitative data collected via surveys and a focus group with LOCI trainees.\n\n\nRESULTS\nQuantitative and qualitative analyses support the LOCI training and organizational strategy intervention in regard to feasibility, acceptability, and perceived utility, as well as impact on leader and supervisee-rated outcomes.\n\n\nCONCLUSIONS\nThe LOCI leadership and organizational change for implementation intervention is a feasible and acceptable strategy that has utility to improve staff-rated leadership for EBP implementation. Further studies are needed to conduct rigorous tests of the proximal and distal impacts of LOCI on leader behaviors, implementation leadership, organizational context, and implementation outcomes. The results of this study suggest that LOCI may be a viable strategy to support organizations in preparing for the implementation and sustainment of EBP.", "title": "" }, { "docid": "7916a261319dad5f257a0b8e0fa97fec", "text": "INTRODUCTION\nPreliminary research has indicated that recreational ketamine use may be associated with marked cognitive impairments and elevated psychopathological symptoms, although no study to date has determined how these are affected by differing frequencies of use or whether they are reversible on cessation of use. In this study we aimed to determine how variations in ketamine use and abstention from prior use affect neurocognitive function and psychological wellbeing.\n\n\nMETHOD\nWe assessed a total of 150 individuals: 30 frequent ketamine users, 30 infrequent ketamine users, 30 ex-ketamine users, 30 polydrug users and 30 controls who did not use illicit drugs. Cognitive tasks included spatial working memory, pattern recognition memory, the Stockings of Cambridge (a variant of the Tower of London task), simple vigilance and verbal and category fluency. Standardized questionnaires were used to assess psychological wellbeing. Hair analysis was used to verify group membership.\n\n\nRESULTS\nFrequent ketamine users were impaired on spatial working memory, pattern recognition memory, Stockings of Cambridge and category fluency but exhibited preserved verbal fluency and prose recall. There were no differences in the performance of the infrequent ketamine users or ex-users compared to the other groups. Frequent users showed increased delusional, dissociative and schizotypal symptoms which were also evident to a lesser extent in infrequent and ex-users. Delusional symptoms correlated positively with the amount of ketamine used currently by the frequent users.\n\n\nCONCLUSIONS\nFrequent ketamine use is associated with impairments in working memory, episodic memory and aspects of executive function as well as reduced psychological wellbeing. 'Recreational' ketamine use does not appear to be associated with distinct cognitive impairments although increased levels of delusional and dissociative symptoms were observed. As no performance decrements were observed in the ex-ketamine users, it is possible that the cognitive impairments observed in the frequent ketamine group are reversible upon cessation of ketamine use, although delusional symptoms persist.", "title": "" }, { "docid": "682921e4e2f000384fdcb9dc6fbaa61a", "text": "The use of Cloud Computing for computation offloading in the robotics area has become a field of interest today. 
The aim of this work is to demonstrate the viability of cloud offloading in a low level and intensive computing task: a vision-based navigation assistance of a service mobile robot. In order to do so, a prototype, running over a ROS-based mobile robot (Erratic by Videre Design LLC) is presented. The information extracted from on-board stereo cameras will be used by a private cloud platform consisting of five bare-metal nodes with AMD Phenom 965 × 4 CPU, with the cloud middleware Openstack Havana. The actual task is the shared control of the robot teleoperation, that is, the smooth filtering of the teleoperated commands with the detected obstacles to prevent collisions. All the possible offloading models for this case are presented and analyzed. Several performance results using different communication technologies and offloading models are explained as well. In addition to this, a real navigation case in a domestic circuit was done. The tests demonstrate that offloading computation to the Cloud improves the performance and navigation results with respect to the case where all processing is done by the robot.", "title": "" }, { "docid": "7681a78f2d240afc6b2e48affa0612c1", "text": "Web usage mining applies data mining procedures to analyze user access of Web sites. As with any KDD (knowledge discovery and data mining) process, WUM contains three main steps: preprocessing, knowledge extraction, and results analysis. We focus on data preprocessing, a fastidious, complex process. Analysts aim to determine the exact list of users who accessed the Web site and to reconstitute user sessions-the sequence of actions each user performed on the Web site. Intersites WUM deals with Web server logs from several Web sites, generally belonging to the same organization. Thus, analysts must reassemble the users' path through all the different Web servers that they visited. Our solution is to join all the log files and reconstitute the visit. Classical data preprocessing involves three steps: data fusion, data cleaning, and data structuration. Our solution for WUM adds what we call advanced data preprocessing. This consists of a data summarization step, which will allow the analyst to select only the information of interest. We've successfully tested our solution in an experiment with log files from INRIA Web sites.", "title": "" }, { "docid": "b0d5ec946a5c36500e3549779dc74329", "text": "Although several image quality measures have been proposed for fingerprints, no work has taken into account the differences among capture devices, and how these differences impact on the image quality. In this paper, several representative measures for assessing the quality fingerprint images are compared using an optical and a capacitive sensor. The capability to discriminate between images of different quality and its relationship with the verification performance is studied. We report differences depending on the sensor, and interesting relationships between sensor technology and features used for quality assessment are also pointed out.", "title": "" }, { "docid": "161fab4195de0d0358de9bd74f3c0805", "text": "Working with sensitive data is often a balancing act between privacy and integrity concerns. Consider, for instance, a medical researcher who has analyzed a patient database to judge the effectiveness of a new treatment and would now like to publish her findings. 
On the one hand, the patients may be concerned that the researcher's results contain too much information and accidentally leak some private fact about themselves; on the other hand, the readers of the published study may be concerned that the results contain too little information, limiting their ability to detect errors in the calculations or flaws in the methodology.\n This paper presents VerDP, a system for private data analysis that provides both strong integrity and strong differential privacy guarantees. VerDP accepts queries that are written in a special query language, and it processes them only if a) it can certify them as differentially private, and if b) it can prove the integrity of the result in zero knowledge. Our experimental evaluation shows that VerDP can successfully process several different queries from the differential privacy literature, and that the cost of generating and verifying the proofs is practical: for example, a histogram query over a 63,488-entry data set resulted in a 20 kB proof that took 32 EC2 instances less than two hours to generate, and that could be verified on a single machine in about one second.", "title": "" }, { "docid": "31338a16eca7c0f60b789c38f2774816", "text": "As a promising area in artificial intelligence, a new learning paradigm, called Small Sample Learning (SSL), has been attracting prominent research attention in the recent years. In this paper, we aim to present a survey to comprehensively introduce the current techniques proposed on this topic. Specifically, current SSL techniques can be mainly divided into two categories. The first category of SSL approaches can be called “concept learning”, which emphasizes learning new concepts from only few related observations. The purpose is mainly to simulate human learning behaviors like recognition, generation, imagination, synthesis and analysis. The second category is called “experience learning”, which usually co-exists with the large sample learning manner of conventional machine learning. This category mainly focuses on learning with insufficient samples, and can also be called small data learning in some literatures. More extensive surveys on both categories of SSL techniques are introduced and some neuroscience evidences are provided to clarify the rationality of the entire SSL regime, and the relationship with human learning process. Some discussions on the main challenges and possible future research directions along this line are also presented.", "title": "" }, { "docid": "ff6420335374291508063663acb9dbe6", "text": "Many people are exposed to loss or potentially traumatic events at some point in their lives, and yet they continue to have positive emotional experiences and show only minor and transient disruptions in their ability to function. Unfortunately, because much of psychology's knowledge about how adults cope with loss or trauma has come from individuals who sought treatment or exhibited great distress, loss and trauma theorists have often viewed this type of resilience as either rare or pathological. 
The author challenges these assumptions by reviewing evidence that resilience represents a distinct trajectory from the process of recovery, that resilience in the face of loss or potential trauma is more common than is often believed, and that there are multiple and sometimes unexpected pathways to resilience.", "title": "" }, { "docid": "a412c41fe943120a513ad9b6fb70cb8b", "text": "Blockchains based on proofs of work (PoW) currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. The security of PoWbased blockchains requires that new transactions are verified, making a proper replication of the blockchain data in the system essential. While existing PoW mining protocols offer considerable incentives for workers to generate blocks, workers do not have any incentives to store the blockchain. This resulted in a sharp decrease in the number of full nodes that store the full blockchain, e.g., in Bitcoin, Litecoin, etc. However, the smaller is the number of replicas or nodes storing the replicas, the higher is the vulnerability of the system against compromises and DoS-attacks. In this paper, we address this problem and propose a novel solution, EWoK (Entangled proofs of WOrk and Knowledge). EWoK regulates in a decentralized-manner the minimum number of replicas that should be stored by tying replication to the only directly-incentivized process in PoW-blockchains—which is PoW itself. EWoK only incurs small modifications to existing PoW protocols, and is fully compliant with the specifications of existing mining hardware—which is likely to increase its adoption by the existing PoW ecosystem. EWoK plugs an efficient in-memory hash-based proof of knowledge and couples them with the standard PoW mechanism. We implemented EWoK and integrated it within commonly used mining protocols, such as GetBlockTemplate and Stratum mining; our results show that EWoK can be easily integrated within existing mining pool protocols and does not impair the mining efficiency.", "title": "" }, { "docid": "1ad65bf27c4c4037d85a97c0cead8c41", "text": "This study explores the issue of effectiveness within virtual teams — groups of people who work together although they are often dispersed across space, time, and/or organizational boundaries. Due to the recent trend towards corporate restructuring, which can, in part, be attributed to an increase in corporate layoffs, mergers and acquisitions, competition, and globalization, virtual teams have become critical for companies to survive. Globalization of the marketplace alone, for that matter, makes such distributed work groups the primary operating units needed to achieve a competitive advantage in this ever-changing business environment. In an effort to determine the factors that contribute to/inhibit the success of a virtual team, a survey was distributed to a total of eight companies in the high technology, agriculture, and professional services industries. Data was then collected from 67 individuals who comprised a total of 12 virtual teams from these companies. Results indicated that several factors were positively correlated to the effectiveness of the participating teams. The teams’ processes and team members’ relations presented the strongest relationships to team performance and team member satisfaction, while the selection procedures and executive leadership styles also exhibited moderate associations to these measures of effectiveness. 
Analysis of predictor variables such as the design process, other internal group dynamics, and additional external support mechanisms, however, depicted weaker relations. Although the connections between the teams’ tools and technologies and communication patterns and the teams’ effectiveness measures did not prove significant, content analysis of the participants’ narrative responses to questions regarding the greatest challenges to virtual teams suggested otherwise. Beyond the traditional strategies used to enhance a team’s effectiveness, further efforts directed towards the specific technology and communication-related issues that concern dispersed team members are needed to supplement the set of best practices identified in the current study. # 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "3b6de41443a56f619178427f80474c17", "text": "Most multi-view 3D reconstruction algorithms, especially when shapefrom-shading cues are used, assume that object appearance is predominantly diffuse. To alleviate this restriction, we introduce S2Dnet, a generative adversarial network for transferring multiple views of objects with specular reflection into diffuse ones, so that multi-view reconstruction methods can be applied more effectively. Our network extends unsupervised image-to-image translation to multiview “specular to diffuse” translation. To preserve object appearance across multiple views, we introduce a Multi-View Coherence loss (MVC) that evaluates the similarity and faithfulness of local patches after the view-transformation. Our MVC loss ensures that the similarity of local correspondences among multi-view images is preserved under the image-to-image translation. As a result, our network yields significantly better results than several single-view baseline techniques. In addition, we carefully design and generate a large synthetic training data set using physically-based rendering. During testing, our network takes only the raw glossy images as input, without extra information such as segmentation masks or lighting estimation. Results demonstrate that multi-view reconstruction can be significantly improved using the images filtered by our network. We also show promising performance on real world training and testing data.", "title": "" }, { "docid": "96ace1fc608d90ae53f903802bb60a10", "text": "Attributes offer useful mid-level features to interpret visual data. While most attribute learning methods are supervised by costly human-generated labels, we introduce a simple yet powerful unsupervised approach to learn and predict visual attributes directly from data. Given a large unlabeled image collection as input, we train deep Convolutional Neural Networks (CNNs) to output a set of discriminative, binary attributes often with semantic meanings. Specifically, we first train a CNN coupled with unsupervised discriminative clustering, and then use the cluster membership as a soft supervision to discover shared attributes from the clusters while maximizing their separability. The learned attributes are shown to be capable of encoding rich imagery properties from both natural images and contour patches. The visual representations learned in this way are also transferrable to other tasks such as object detection. We show other convincing results on the related tasks of image retrieval and classification, and contour detection.", "title": "" } ]
scidocsrr
a65211bf162923b2f34f6d2cfb79b8e1
A Plant Recognition Approach Using Shape and Color Features in Leaf Images
[ { "docid": "376d8a44c2d9e67536ee9beb2a8d1bd3", "text": "It is now well-established that k nearest-neighbour classi\"ers o!er a quick and reliable method of data classi\"cation. In this paper we extend the basic de\"nition of the standard k nearest-neighbour algorithm to include the ability to resolve con#icts when the highest number of nearest neighbours are found for more than one training class (model-1). We also propose model-2 of nearest-neighbour algorithm that is based on \"nding the nearest average distance rather than nearest maximum number of neighbours. These new models are explored using image understanding data. The models are evaluated on pattern recognition accuracy for correctly recognising image texture data of \"ve natural classes: grass, trees, sky, river re#ecting sky and river re#ecting trees. On noise contaminated test data, the new nearest neighbour models show very promising results for further studies. We evaluate their performance with increasing values of neighbours (k) and discuss their future in scene analysis research. CrownCopyright 2001 Published by Elsevier Science Ltd. on behalf of Pattern Recognition Society. All rights reserved.", "title": "" } ]
[ { "docid": "a412cff5999d0c257562335465a28323", "text": "In transfer learning, what and how to transfer are two primary issues to be addressed, as different transfer learning algorithms applied between a source and a target domain result in different knowledge transferred and thereby the performance improvement in the target domain. Determining the optimal one that maximizes the performance improvement requires either exhaustive exploration or considerable expertise. Meanwhile, it is widely accepted in educational psychology that human beings improve transfer learning skills of deciding what to transfer through meta-cognitive reflection on inductive transfer learning practices. Motivated by this, we propose a novel transfer learning framework known as Learning to Transfer (L2T) to automatically determine what and how to transfer are the best by leveraging previous transfer learning experiences. We establish the L2T framework in two stages: 1) we learn a reflection function encrypting transfer learning skills from experiences; and 2) we infer what and how to transfer are the best for a future pair of domains by optimizing the reflection function. We also theoretically analyse the algorithmic stability and generalization bound of L2T, and empirically demonstrate its superiority over several state-ofthe-art transfer learning algorithms.", "title": "" }, { "docid": "20574fb7271c35170a7601ea9681cc97", "text": "All intelligence relies on search for example, the search for an intelligent agent's next action. Search is only likely to succeed in resource-bounded agents if they have already been biased towards finding the right answer. In artificial agents, the primary source of bias is engineering. This dissertation describes an approach, Behavior-Oriented Design (BOD) for engineering complex agents. A complex agent is one that must arbitrate between potentially conflicting goals or behaviors. Behavior-oriented design builds on work in behavior-based and hybrid architectures for agents, and the object oriented approach to software engineering. The primary contributions of this dissertation are: 1. The BOD architecture: a modular architecture with each module providing specialized representations to facilitate learning. This includes one pre-specified module and representation for action selection or behavior arbitration. The specialized representation underlying BOD action selection is Parallel-rooted, Ordered, Slip-stack Hierarchical (POSH) reactive plans. 2. The BOD development process: an iterative process that alternately scales the agent's capabilities then optimizes the agent for simplicity, exploiting tradeoffs between the component representations. This ongoing process for controlling complexity not only provides bias for the behaving agent, but also facilitates its maintenance and extendibility. The secondary contributions of this dissertation include two implementations of POSH action selection, a procedure for identifying useful idioms in agent architectures and using them to distribute knowledge across agent paradigms, several examples of applying BOD idioms to established architectures, an analysis and comparison of the attributes and design trends of a large number of agent architectures, a comparison of biological (particularly mammalian) intelligence to artificial agent architectures, a novel model of primate transitive inference, and many other examples of BOD agents and BOD development. 
Thesis Supervisor: Lynn Andrea Stein Title: Associate Professor of Computer Science", "title": "" }, { "docid": "52b481885dc7ad62dc4e8b3e31b9e71e", "text": "In this paper, we propose a novel deep learning based video saliency prediction method, named DeepVS. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which includes 32 subjects' fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.", "title": "" }, { "docid": "78cae00cd81dc1f519d25ff6cb8f41c8", "text": "We present a technique for efficiently synthesizing images of atmospheric clouds using a combination of Monte Carlo integration and neural networks. The intricacies of Lorenz-Mie scattering and the high albedo of cloud-forming aerosols make rendering of clouds---e.g. the characteristic silver lining and the \"whiteness\" of the inner body---challenging for methods based solely on Monte Carlo integration or diffusion theory. We approach the problem differently. Instead of simulating all light transport during rendering, we pre-learn the spatial and directional distribution of radiant flux from tens of cloud exemplars. To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source. The descriptor is input to a deep neural network that predicts the radiance function for each shading configuration. We make the key observation that progressively feeding the hierarchical descriptor into the network enhances the network's ability to learn faster and predict with higher accuracy while using fewer coefficients. We also employ a block design with residual connections to further improve performance. A GPU implementation of our method synthesizes images of clouds that are nearly indistinguishable from the reference solution within seconds to minutes. Our method thus represents a viable solution for applications such as cloud design and, thanks to its temporal stability, for high-quality production of animated content.", "title": "" }, { "docid": "5eb65797b9b5e90d5aa3968d5274ae72", "text": "Blockchains enable tamper-proof, ordered logging for transactional data in a decentralized manner over open-access, overlay peer-to-peer networks. In this paper, we propose a decentralized framework of proactive caching in a hierarchical wireless network based on blockchains.
We employ the blockchain-based smart contracts to construct an autonomous content caching market. In the market, the cache helpers are able to autonomously adapt their caching strategies according to the market statistics obtained from the blockchain, and the truthfulness of trustless nodes are financially enforced by smart contract terms. Further, we propose an incentive-compatible consensus mechanism based on proof-of-stake to financially encourage the cache helpers to stay active in service. We model the interaction between the cache helpers and the content providers as a Chinese restaurant game. Based on the theoretical analysis regarding the Nash equilibrium of the game, we propose a decentralized strategy-searching algorithm using sequential best response. The simulation results demonstrate both the efficiency and reliability of the proposed equilibrium searching algorithm.", "title": "" }, { "docid": "e71402bed9c526d9152885ef86c30bb5", "text": "Narratives structure our understanding of the world and of ourselves. They exploit the shared cognitive structures of human motivations, goals, actions, events, and outcomes. We report on a computational model that is motivated by results in neural computation and captures fine-grained, context sensitive information about human goals, processes, actions, policies, and outcomes. We describe the use of the model in the context of a pilot system that is able to interpret simple stories and narrative fragments in the domain of international politics and economics. We identify problems with the pilot system and outline extensions required to incorporate several crucial dimensions of narrative structure.", "title": "" }, { "docid": "486bd67781bb1067aa4ff6009cdeecb7", "text": "BACKGROUND\nThere was less than satisfactory progress, especially in sub-Saharan Africa, towards child and maternal mortality targets of Millennium Development Goals (MDGs) 4 and 5. The main aim of this study was to describe the prevalence and determinants of essential new newborn care practices in the Lawra District of Ghana.\n\n\nMETHODS\nA cross-sectional study was carried out in June 2014 on a sample of 422 lactating mothers and their children aged between 1 and 12 months. A systematic random sampling technique was used to select the study participants who attended post-natal clinic in the Lawra district hospital.\n\n\nRESULTS\nOf the 418 newborns, only 36.8% (154) was judged to have had safe cord care, 34.9% (146) optimal thermal care, and 73.7% (308) were considered to have had adequate neonatal feeding. The overall prevalence of adequate new born care comprising good cord care, optimal thermal care and good neonatal feeding practices was only 15.8%. Mothers who attained at least Senior High Secondary School were 20.5 times more likely to provide optimal thermal care [AOR 22.54; 95% CI (2.60-162.12)], compared to women had no formal education at all. Women who received adequate ANC services were 4.0 times (AOR  =  4.04 [CI: 1.53, 10.66]) and 1.9 times (AOR  =  1.90 [CI: 1.01, 3.61]) more likely to provide safe cord care and good neonatal feeding as compared to their counterparts who did not get adequate ANC. However, adequate ANC services was unrelated to optimal thermal care. Compared to women who delivered at home, women who delivered their index baby in a health facility were 5.6 times more likely of having safe cord care for their babies (AOR = 5.60, Cl: 1.19-23.30), p = 0.03.\n\n\nCONCLUSIONS\nThe coverage of essential newborn care practices was generally low. 
Essential newborn care practices were positively associated with high maternal educational attainment, adequate utilization of antenatal care services and high maternal knowledge of newborn danger signs. Therefore, greater improvement in essential newborn care practices could be attained through proven low-cost interventions such as effective ANC services, health and nutrition education that should span from community to health facility levels.", "title": "" }, { "docid": "28cb5dee0fc91bd9c99ede29c6df0f9b", "text": "A crowdsourcing system, such as the Amazon Mechanical Turk (AMT), provides a platform for a large number of questions to be answered by Internet workers. Such systems have been shown to be useful to solve problems that are difficult for computers, including entity resolution, sentiment analysis, and image recognition. In this paper, we investigate the online task assignment problem: Given a pool of n questions, which of the k questions should be assigned to a worker? A poor assignment may not only waste time and money, but may also hurt the quality of a crowdsourcing application that depends on the workers' answers. We propose to consider quality measures (also known as evaluation metrics) that are relevant to an application during the task assignment process. Particularly, we explore how Accuracy and F-score, two widely-used evaluation metrics for crowdsourcing applications, can facilitate task assignment. Since these two metrics assume that the ground truth of a question is known, we study their variants that make use of the probability distributions derived from workers' answers. We further investigate online assignment strategies, which enables optimal task assignments. Since these algorithms are expensive, we propose solutions that attain high quality in linear time. We develop a system called the Quality-Aware Task Assignment System for Crowdsourcing Applications (QASCA) on top of AMT. We evaluate our approaches on five real crowdsourcing applications. We find that QASCA is efficient, and attains better result quality (of more than 8% improvement) compared with existing methods.", "title": "" }, { "docid": "044a73d9db2f61dc9b4f9de0bdaa1b3f", "text": "Traditionally employed human-to-human and human-to-machine communication has recently been replaced by a new trend known as the Internet of things (IoT). IoT enables device-to-device communication without any human intervention, hence, offers many challenges. In this paradigm, machine’s self-sustainability due to limited energy capabilities presents a great challenge. Therefore, this paper proposed a low-cost energy harvesting device using rectenna to mitigate the problem in the areas where battery constraint issues arise. So, an energy harvester is designed, optimized, fabricated, and characterized for energy harvesting and IoT applications which simply recycles radio-frequency (RF) energy at 2.4 GHz, from nearby Wi-Fi/WLAN devices and converts them to useful dc power. The physical model comprises of antenna, filters, rectifier, and so on. A rectangular patch antenna is designed and optimized to resonate at 2.4 GHz using the well-known transmission-line model while the band-pass and low-pass filters are designed using lumped components. Schottky diode (HSMS-2820) is used for rectification. 
The circuit is designed and fabricated using the low-cost FR4 substrate (h = 16 mm and εr = 4.6) having the fabricated dimensions of 285 mm × 90 mm. Universal software radio peripheral and GNU Radio are employed to measure the received RF power, while similar measurements are carried out using R&S spectrum analyzer for validation. The received measured power is −64.4 dBm at the output port of the rectenna circuit. Hence, our design enables a pervasive deployment of self-operable next-generation IoT devices.", "title": "" }, { "docid": "c0c1303f7038011c7f26151c3ba743be", "text": "This article is motivated by the practical problem of highway traffic estimation using velocity measurements from GPS enabled mobile devices such as cell phones. In order to simplify the estimation procedure, a velocity model for highway traffic is constructed, which results in a dynamical system in which the observation operator is linear. This article presents a new scalar hyperbolic partial differential equation (PDE) model for traffic velocity evolution on highways, based on the seminal Lighthill-Whitham-Richards (LWR) PDE for density. Equivalence of the solution of the new velocity PDE and the solution of the LWR PDE is shown for quadratic flux functions. Because this equivalence does not hold for general flux functions, a discretized model of velocity evolution based on the Godunov scheme applied to the LWR PDE is proposed. Using an explicit instantiation of the weak boundary conditions of the PDE, the discrete velocity evolution model is generalized to a network, thus making the model applicable to arbitrary highway networks. The resulting velocity model is a nonlinear and nondifferentiable discrete time dynamical system with a linear observation operator, which enables the use of a Monte-Carlo based ensemble Kalman filtering data assimilation algorithm. Accuracy of the model and estimation technique is validated on experimental data obtained from a large-scale field experiment.", "title": "" }, { "docid": "09b399d6416c1821bc4635690559cdfa", "text": "One of the most complicated academic endeavours in transmission pedagogies is to generate democratic participation of all students and public expression of silenced voices. While the potential of mobile phones, particularly mobile instant messaging (MIM), to trigger broadened academic participation is increasingly acknowledged in literature, integrating MIM into classrooms and out-of-the-classroom tasks has often been confronted with academic resistance. Academic uncertainty about MIM is often predicated on its perceived distractive nature and potential to trigger off-task social behaviours. This paper argues that MIM has potential to create alternative dialogic spaces for student collaborative engagements in informal contexts, which can gainfully transform teaching and learning. An instance of a MIM, WhatsApp, was adopted for an information technology course at a South African university with a view to heighten lecturer–student and peer-based participation, and enhance pedagogical delivery and inclusive learning in formal (lectures) and informal spaces.
The findings suggest heightened student participation, the fostering of learning communities for knowledge creation and progressive shifts in the lecturer’s mode of pedagogical delivery. However, the concomitant challenge of using MIM included mature adults’ resentment of the merging of academic and family life occasioned by WhatsApp consultations after hours. Students also expressed ambivalence about MIM’s wide-scale roll-out in different academic programmes. Introduction The surging popularity of mobile devices as technologies that support collaborative learning has been widely debated in recent years (Echeverría et al, 2011; Hwang, Huang & Wu, 2011; Koole, 2009). Echeverría et al (2011) articulate the multiple academic purposes of mobile devices as follows: access to content, supplementation of institutionally provided content and acquisition of specific information, fostering interaction and information sharing among students. Despite this tremendous potential of mobile phones to activate deep student engagement with content, mobile instant messaging (MIM) remains one of the least exploited functionalities of mobile devices in higher educational institutions (HIEs). The academic uncertainty about MIM at African HIEs is British Journal of Educational Technology Vol 44 No 4 2013 544–561 doi:10.1111/bjet.12057 © 2013 British Educational Research Association possibly explained by (1) the distractive nature of text messages, (2) limited academic conceptualisation of how textual resources can be optimally integrated into mainstream instructional practices and (3) uncertainties about the academic rigour of discussions generated via text messages. Notwithstanding these academic concerns about MIM, this social practice promotes subscriptions to information, builds social networks, supports brainstorming and fosters mutual understanding through sharing of assets like opinions (Hwang et al, 2011). Therefore, MIM enhances productive communication among learning clusters through the sharing of mutual intentions, social objects, learning resources and needs. Practitioner Notes What is already known about this topic • Mobile devices are productive technologies with potential to foster informal collaborative learning. • Mobile phones are useful tools for the transmission of basic content and the supplementation of institutionally generated content. • Academic potential of mobile instant messaging (MIM) has been suboptimally exploited in higher education in general and South African higher education in particular. What this paper adds • Underutilisation of MIM can be attributed to lecturers’ limited conceptualisation of how to integrate textual resources into mainstream instructional practices and their uncertainties about the academic rigour of discussions generated via text messages. • Lecturer’s use of an instance of MIM, WhatsApp, for peer-based engagement in an information technology course contributed to peer-based coaching and informal work teams, which transformed his hierarchical models of teaching. • WhatsApp impacted student participation by promoting social constructivist learning through spontaneous discussions, boosting student self-confidence to engage anonymously and enhancing the sharing of collectively generated resources across multiple spaces. • WhatsApp’s supplementation of student academic material after hours bridged the information divide for geographically remote students who had limited access to academic resources after work hours. 
• Mature, married students conceived the provision of academic materials after hours via WhatsApp as disruptive of their family life as quality family time became seamlessly integrated into academic pursuits. Implications for practice and/or policy • Academic use of WhatsApp should consider the additional responsibilities that it requires—need to contribute to an online learning community, expectations to interact at odd hours, and the pressure to read and reflect on peer-generated postings. • Interaction after hours should be well timed and streamlined to account for mature students’ competing family commitments, and additional software that signals and triggers to their learning clusters their availability for interaction should be installed on WhatsApp. • Lecturers should harvest (mine) collectively generated resources on WhatsApp to support the institutional memory and sustained student meaningful interaction. Using instant messaging to leverage participation 545 © 2013 British Educational Research Association Despite the aforementioned academic incentives, what is least understood in literature is MIM’s influence on pedagogy (student academic participation, lecturers’ ways of instructional delivery) and digital inclusion of learners from diverse academic backgrounds. The rationale of this paper, therefore, is twofold: (1) to explore the pedagogical value of a MIM service, WhatsApp, particularly its potential to enhance academic participation of all learners and transform lecturers’ teaching practices and (2) examine its capacity to breach the digital divide among learners in geographically dispersed informal contexts. An informing framework comprising WhatsAppenabled lecturer–student and student–peer consultations was drawn upon to explore the potential of MIM to promote equitable participation in diverse informal spaces. The rest of the paper is structured as follows: a literature review and theoretical framework are articulated, research questions and methodology are presented, findings are discussed and a conclusion is given. Literature review M-Learning For Kukulska-Hulme and Traxler (2005), mobile learning (m-learning) is generally about enabling flexible learning through mobile devices. However, new constructions of m-learning embrace the mobility of the context of interaction that is mediated by technologies. The Centre for Digital Education (2011) suggests that a new direction in m-learning enables lecturer mobility including mobile device-mediated creation of learning materials on the spot and in the field. This new approach foregrounds a transitory context in which all learning resources (interacting peers, lecturers, pedagogical content, the enabling technology) are all “on-the-move.” Consequently, m-learning potentially breaches the spatial, temporal and time zones by bringing educational resources at the disposal of the roaming learner in real time. MIM MIM is an asynchronous communication tool that works on wireless connections, handhelds and desktop devices via the Internet and allows students and peers to chat in real time (Dourando, Parker & de la Harpe, 2007). It fosters unique social presence that is qualitatively and visually distinct from email systems. 
As Quan-Haase, Cothrel and Wellman (2005) suggest, IM applications differ from emails primarily in their focus on the immediate delivery of messages through (1) a “pop-up” mechanism to display messages the moment they are received, (2) a user-generated visible list of other users (“buddy list”) and (3) a mechanism for indicating when “buddies” are online and available to receive messages. By providing a detailed account of the online presence of users (online, offline, in a meeting, away), MIM provides a rich context for open and transparent interaction that alerts communicants to the temporal and time-span constraints of the interaction. However, what remains unknown are the influences of MIM social presence on lecturers’ instructional practices and the digital inclusion of students with varied exposure and experience in MIM academic usage. Cameron and Webster’s (2005) study on IM usage by 19 employees from four organisations suggests that critical mass is among the core explanations for the widespread adoption of IM. IM was considered appropriate when senders wanted to emphasise the intentionality of messages, elicit quick responses and enhance efficient communication (ibid.). What has not been explored, nevertheless, is the influence of pedagogical intentionality on the meaningful academic participation of underprepared learners. Sotillo’s (2006) study explored English as Second Language (ESL) learners’ negotiation of interaction and collaborative problem solving using IM. IM environment rendered interactions that facilitated student awareness of grammatical structures of second language communication. Although the study examined technology-mediated interactions of students with varied linguistic competences, it did not interrogate the relationship between MIM and digital inclusion of students. 546 British Journal of Educational Technology Vol 44 No 4 2013 © 2013 British Educational Research Association The educational benefits of MIM are as follows: encouraging contact between students and lecturers, developing student-based reciprocal interactions and academic cooperation, promoting active learning, providing instant feedback, emphasising", "title": "" }, { "docid": "b3a775719d87c3837de671001c77568b", "text": "Regularization of Deep Neural Networks (DNNs) for the sake of improving their generalization capability is important and challenging. The development in this line benefits theoretical foundation of DNNs and promotes their usability in different areas of artificial intelligence. In this paper, we investigate the role of Rademacher complexity in improving generalization of DNNs and propose a novel regularizer rooted in Local Rademacher Complexity (LRC). While Rademacher complexity is well known as a distribution-free complexity measure of function class that help boost generalization of statistical learning methods, extensive study shows that LRC, its counterpart focusing on a restricted function class, leads to sharper convergence rates and potential better generalization given finite training sample. Our LRC based regularizer is developed by estimating the complexity of the function class centered at the minimizer of the empirical loss of DNNs. Experiments on various types of network architecture demonstrate the effectiveness of LRC regularization in improving generalization. 
Moreover, our method features the state-of-the-art result on the CIFAR-10 dataset with network architecture found by neural architecture search.", "title": "" }, { "docid": "827d7c359eadf40e8103c6c534b6e73f", "text": "Making accurate recommendations for users has become an important function of e-commerce system with the rapid growth of WWW. Conventional recommendation systems usually recommend similar objects, which are of the same type with the query object without exploring the semantics of different similarity measures. In this paper, we organize objects in the recommendation system as a heterogeneous network. Through employing a path-based relevance measure to evaluate the relatedness between any-typed objects and capture the subtle semantic containing in each path, we implement a prototype system (called HeteRecom) for semantic based recommendation. HeteRecom has the following unique properties: (1) It provides the semantic-based recommendation function according to the path specified by users. (2) It recommends the similar objects of the same type as well as related objects of different types. We demonstrate the effectiveness of our system with a real-world movie data set.", "title": "" }, { "docid": "bdd1c64962bfb921762259cca4a23aff", "text": "Ever since the emergence of social networking sites (SNSs), it has remained a question without a conclusive answer whether SNSs make people more or less lonely. To achieve a better understanding, researchers need to move beyond studying overall SNS usage. In addition, it is necessary to attend to personal attributes as potential moderators. Given that SNSs provide rich opportunities for social comparison, one highly relevant personality trait would be social comparison orientation (SCO), and yet this personal attribute has been understudied in social media research. Drawing on literature of psychosocial implications of social media use and SCO, this study explored associations between loneliness and various Instagram activities and the role of SCO in this context. A total of 208 undergraduate students attending a U.S. mid-southern university completed a self-report survey (Mage = 19.43, SD = 1.35; 78 percent female; 57 percent White). Findings showed that Instagram interaction and Instagram browsing were both related to lower loneliness, whereas Instagram broadcasting was associated with higher loneliness. SCO moderated the relationship between Instagram use and loneliness such that Instagram interaction was related to lower loneliness only for low SCO users. The results revealed implications for healthy SNS use and the importance of including personality traits and specific SNS use patterns to disentangle the role of SNS use in psychological well-being.", "title": "" }, { "docid": "5c5e9a93b4838cbebd1d031a6d1038c4", "text": "Live migration of virtual machines (VMs) is key feature of virtualization that is extensively leveraged in IaaS cloud environments: it is the basic building block of several important features, such as load balancing, pro-active fault tolerance, power management, online maintenance, etc. While most live migration efforts concentrate on how to transfer the memory from source to destination during the migration process, comparatively little attention has been devoted to the transfer of storage. 
This problem is gaining increasing importance: due to performance reasons, virtual machines that run large-scale, data-intensive applications tend to rely on local storage, which poses a difficult challenge on live migration: it needs to handle storage transfer in addition to memory transfer. This paper proposes a memory migration independent approach that addresses this challenge. It relies on a hybrid active push / prioritized prefetch strategy, which makes it highly resilient to rapid changes of disk state exhibited by I/O intensive workloads. At the same time, it is minimally intrusive in order to ensure a maximum of portability with a wide range of hypervisors. Large scale experiments that involve multiple simultaneous migrations of both synthetic benchmarks and a real scientific application show improvements of up to 10x faster migration time, 10x less bandwidth consumption and 8x less performance degradation over state-of-art.", "title": "" }, { "docid": "c28f10a6c74dc20df6f1ef55924bab2b", "text": "Existing studies on the maintenance of open source projects focus primarily on the analyses of the overall maintenance of the projects and less on specific categories like the corrective maintenance. This paper presents results from an empirical study of bug reports from an open source project, identifies user participation in the corrective maintenance process through bug reports, and constructs a model to predict the corrective maintenance effort for the project in terms of the time taken to correct faults. Our study focuses on 72482 bug reports from over nine releases of Ubuntu, a popular Linux distribution. We present three main results 1) 95% of the bug reports are corrected by people participating in groups of size ranging from 1 to 8 people, 2) there is a strong linear relationship (about 92%) between the number of people participating in a bug report and the time taken to correct it, 3) a linear model can be used to predict the time taken to correct bug reports.", "title": "" }, { "docid": "97af37cd244646609cf60dd386233186", "text": "Sign language is vital for facilitating communication between hearing impaired and the rest of society. Researchers in Sign language recognition had tailored completely different sensors to capture hand signs. Gloves, digital cameras, depth cameras and Kinect were used instead in most systems. Owing to signs closeness, input accuracy is a terribly essential constraint to achieve a high recognition accuracy. Our aim is to design a Sign Language to Speech Translation system for ISL (i.e. Indian Sign Language) based on a brand new digital 3D motion detector referred to as Leap Motion, that consists of two inbuilt cameras and three infrared sensors that capture 3-D dynamic hand gestures. The palm-sized Leap Motion sensor provides far more portable and economical solution than Cyber glove or Microsoft Kinect employed in existing studies. This sensor tackles the major issues in vision-based systems like skin color, lighting etc... The planned system will make use of DTW algorithm as a classifier for converting hand gestures into an appropriate text as well as an audible speech.", "title": "" }, { "docid": "35f6cf610dcb5bc08f13f62614aae3bd", "text": "The problem of determination of the accurate distribution of the return current in AT (autotransformer) electric traction systems (supplied at 2×25 kV) for High Speed Railways is considered.
The path of the traction return current flowing from the rolling stock axles back to the supply (i.e., substation) is composed of the traction rails and additional earth potential conductors. The overhead supply conductors in contact with the train pantograph are connected to a symmetrical circuit (the feeder) with the purpose of current balancing. This arrangement and its influence on current to earth are considered: the return current divides among rails (as signalling disturbing current) and earth, depending on the value of the electric parameters of the system and the earth and on the circuit arrangement and on the relative position of system devices. The amplitude (as a percentage of the total return current) of the disturbing current may be high enough to cause interference to signalling. This work investigates the behavior of the return current in AT electric railway systems, on the basis of a reference system for the variation of the most important electrical parameters.", "title": "" }, { "docid": "a526cd280b4d15d3f2a3acbed60afae3", "text": "Vehicular communications, though a reality, must continue to evolve to support higher throughput and, above all, ultralow latency to accommodate new use cases, such as the fully autonomous vehicle. Cybersecurity must be assured since the risk of losing control of vehicles if a country were to come under attack is a matter of national security. This article presents the technological enablers that ensure security requirements are met. Under the umbrella of a dedicated network slice, this article proposes the use of content-centric networking (CCN), instead of conventional transmission control protocol/Internet protocol (TCP/IP) routing and permissioned blockchains that allow for the dynamic control of the source reliability, and the integrity and validity of the information exchanged.", "title": "" }, { "docid": "a0b862a758c659b62da2114143bf7687", "text": "The class imbalanced problem occurs in various disciplines when one of target classes has a tiny number of instances comparing to other classes. A typical classifier normally ignores or neglects to detect a minority class due to the small number of class instances. SMOTE is one of over-sampling techniques that remedies this situation. It generates minority instances within the overlapping regions. However, SMOTE randomly synthesizes the minority instances along a line joining a minority instance and its selected nearest neighbours, ignoring nearby majority instances. Our technique called SafeLevel-SMOTE carefully samples minority instances along the same line with different weight degree, called safe level. The safe level computes by using nearest neighbour minority instances. By synthesizing the minority instances more around larger safe level, we achieve a better accuracy performance than SMOTE and Borderline-SMOTE.", "title": "" } ]
scidocsrr
ef039bc2f811f4663361796a2806fb6b
Lambda Obfuscation
[ { "docid": "ea5697d417fe154be77d941c19d8a86e", "text": "The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.", "title": "" } ]
[ { "docid": "8dd540b33035904f63c67b57d4c97aa3", "text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either due to technical lapse in the security mechanisms or due to defects in software implementations. Standard Bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.", "title": "" }, { "docid": "78e712f5d052c08a7dcbc2ee6fd92f96", "text": "Bug report contains a vital role during software development, However bug reports belongs to different categories such as performance, usability, security etc. This paper focuses on security bug and presents a bug mining system for the identification of security and non-security bugs using the term frequency-inverse document frequency (TF-IDF) weights and naïve bayes. We performed experiments on bug report repositories of bug tracking systems such as bugzilla and debugger. In the proposed approach we apply text mining methodology and TF-IDF on the existing historic bug report database based on the bug s description to predict the nature of the bug and to train a statistical model for manually mislabeled bug reports present in the database. The tool helps in deciding the priorities of the incoming bugs depending on the category of the bugs i.e. whether it is a security bug report or a non-security bug report, using naïve bayes. Our evaluation shows that our tool using TF-IDF is giving better results than the naïve bayes method.", "title": "" }, { "docid": "eec15a5d14082d625824452bd070ec38", "text": "Food waste is a major environmental issue. Expired products are thrown away, implying that too much food is ordered compared to what is sold and that a more accurate prediction model is required within grocery stores. In this study the two prediction models Long Short-Term Memory (LSTM) and Autoregressive Integrated Moving Average (ARIMA) were compared on their prediction accuracy in two scenarios, given sales data for different products, to observe if LSTM is a model that can compete against the ARIMA model in the field of sales forecasting in retail. In the first scenario the models predict sales for one day ahead using given data, while they in the second scenario predict each day for a week ahead. Using the evaluation measures RMSE and MAE together with a t-test the results show that the difference between the LSTM and ARIMA model is not of statistical significance in the scenario of predicting one day ahead. However when predicting seven days ahead, the results show that there is a statistical significance in the difference indicating that the LSTM model has higher accuracy. 
This study therefore concludes that the LSTM model is promising in the field of sales forecasting in retail and able to compete against the ARIMA model.", "title": "" }, { "docid": "d7a75e98a1faa39262c50ef03edc8708", "text": "Executive Overview The strategic leadership of ethical behavior in business can no longer be ignored. Executives must accept the fact that the moral impact of their leadership presence and behaviors will rarely, if ever, be neutral. In the leadership capacity, executives have great power to shift the ethics mindfulness of organizational members in positive as well as negative directions. Rather than being left to chance, this power to serve as ethics leaders must be used to establish a social context within which positive self-regulation of ethical behavior becomes a clear and compelling organizational norm and in which people act ethically as a matter of routine. This article frames the responsibility for strategic leadership of ethical behavior on three premises: (1) It must be done—a stakeholder analysis of the total costs of ethical failures confirms the urgency for ethics change; (2) It can be done—exemplars show that a compelling majority of an organization's membership can be influenced to make ethical choices; (3) It is sustainable—integrity programs help build and confirm corporate cultures in which principled actions and ethics norms predominate.", "title": "" }, { "docid": "5a07e2b2a12e394174dbed1534085713", "text": "BACKGROUND\nGuidance in the United States and United Kingdom has included cognitive behavior therapy for psychosis (CBTp) as a preferred therapy. But recent advances have widened the CBTp targets to other symptoms and have different methods of provision, eg, in groups.\n\n\nAIM\nTo explore the effect sizes of current CBTp trials including targeted and nontargeted symptoms, modes of action, and effect of methodological rigor.\n\n\nMETHOD\nThirty-four CBTp trials with data in the public domain were used as source data for a meta-analysis and investigation of the effects of trial methodology using the Clinical Trial Assessment Measure (CTAM).\n\n\nRESULTS\nThere were overall beneficial effects for the target symptom (33 studies; effect size = 0.400 [95% confidence interval [CI] = 0.252, 0.548]) as well as significant effects for positive symptoms (32 studies), negative symptoms (23 studies), functioning (15 studies), mood (13 studies), and social anxiety (2 studies) with effects ranging from 0.35 to 0.44. However, there was no effect on hopelessness. Improvements in one domain were correlated with improvements in others. Trials in which raters were aware of group allocation had an inflated effect size of approximately 50%-100%. But rigorous CBTp studies showed benefit (estimated effect size = 0.223; 95% CI = 0.017, 0.428) although the lower end of the CI should be noted. Secondary outcomes (eg, negative symptoms) were also affected such that in the group of methodologically adequate studies the effect sizes were not significant.\n\n\nCONCLUSIONS\nAs in other meta-analyses, CBTp had beneficial effect on positive symptoms. However, psychological treatment trials that make no attempt to mask the group allocation are likely to have inflated effect sizes.
Evidence considered for psychological treatment guidance should take into account specific methodological detail.", "title": "" }, { "docid": "f5ce4a13a8d081243151e0b3f0362713", "text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. Nearly all restoration algorithms (e.g., the", "title": "" }, { "docid": "15054343b43ae67e877e5bf0a9b93afd", "text": "We discuss Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977).", "title": "" }, { "docid": "bb6193287aa2733e1606ab8761e1e7dd", "text": "A grating coupler having asymmetric grating trenches for low back reflections is experimentally demonstrated. Conventional and asymmetric-trench grating couplers have been fabricated on a silicon nitride waveguide platform. Both grating couplers have fully etched trenches, which normally result in higher back reflections than shallow-etched trenches. For evaluating the back reflection characteristics, test structures based on a 3-dB multimode interference power splitter have been measured and the backreflection has been extracted from each grating coupler using an equivalent optical circuit. The designed grating coupler has no critical penalty (<0.2 dB) in coupling efficiency and ~5 dB lower back reflections than a conventional grating coupler design. Using ray transfer matrix modeling, further improvements to the back reflection characteristics of the asymmetric grating coupler are expected.", "title": "" }, { "docid": "bfc663107f88522f438bd173db2b85ce", "text": "While much progress has been made in how to encode a text sequence into a sequence of vectors, less attention has been paid to how to aggregate these preceding vectors (outputs of RNN/CNN) into fixed-size encoding vector. Usually, a simple max or average pooling is used, which is a bottom-up and passive way of aggregation and lack of guidance by task information. In this paper, we propose an aggregation mechanism to obtain a fixed-size encoding with a dynamic routing policy. The dynamic routing policy is dynamically deciding that what and how much information need be transferred from each word to the final encoding of the text sequence. 
Following the work of Capsule Network, we design two dynamic routing policies to aggregate the outputs of RNN/CNN encoding layer into a final encoding vector. Compared to the other aggregation methods, dynamic routing can refine the messages according to the state of final encoding vector. Experimental results on five text classification tasks show that our method outperforms other aggregating models by a significant margin. Related source code is released on our github page1.", "title": "" }, { "docid": "4872da79e7d01e8bb2a70ab17c523118", "text": "In recent years, social media has become a customer touch-point for the business functions of marketing, sales and customer service. We aim to show that intention analysis might be useful to these business functions and that it can be performed effectively on short texts (at the granularity level of a single sentence). We demonstrate a scheme of categorization of intentions that is amenable to automation using simple machine learning techniques that are language-independent. We discuss the grounding that this scheme of categorization has in speech act theory. In the demonstration we go over a number of usage scenarios in an attempt to show that the use of automatic intention detection tools would benefit the business functions of sales, marketing and service. We also show that social media can be used not just to convey pleasure or displeasure (that is, to express sentiment) but also to discuss personal needs and to report problems (to express intentions). We evaluate methods for automatically discovering intentions in text, and establish that it is possible to perform intention analysis on social media with an accuracy of 66.97%± 0.10%.", "title": "" }, { "docid": "33e41cf93ec8bb99c215dbce4afc34f8", "text": "This paper presents a general, trainable system for object detection in unconstrained, cluttered scenes. The system derives much of its power from a representation that describes an object class in terms of an overcomplete dictionary of local, oriented, multiscale intensity differences between adjacent regions, efficiently computable as a Haar wavelet transform. This example-based learning approach implicitly derives a model of an object class by training a support vector machine classifier using a large set of positive and negative examples. We present results on face, people, and car detection tasks using the same architecture. In addition, we quantify how the representation affects detection performance by considering several alternate representations including pixels and principal components. We also describe a real-time application of our person detection system as part of a driver assistance system.", "title": "" }, { "docid": "cda5c6908b4f52728659f89bb082d030", "text": "Until a few years ago the diagnosis of hair shaft disorders was based on light microscopy or scanning electron microscopy on plucked or cut samples of hair. Dermatoscopy is a new fast, noninvasive, and cost-efficient technique for easy in-office diagnosis of all hair shaft abnormalities including conditions such as pili trianguli and canaliculi that are not recognizable by examining hair shafts under the light microscope. It can also be used to identify disease limited to the eyebrows or eyelashes. 
Dermatoscopy allows for fast examination of the entire scalp and is very helpful to identify the affected hair shafts when the disease is focal.", "title": "" }, { "docid": "444364c2ab97bef660ab322420fc5158", "text": "We present a telerobotics research platform that provides complete access to all levels of control via open-source electronics and software. The electronics employs an FPGA to enable a centralized computation and distributed I/O architecture in which all control computations are implemented in a familiar development environment (Linux PC) and low-latency I/O is performed over an IEEE-1394a (FireWire) bus at speeds up to 400 Mbits/sec. The mechanical components are obtained from retired first-generation da Vinci ® Surgical Systems. This system is currently installed at 11 research institutions, with additional installations underway, thereby creating a research community around a common open-source hardware and software platform.", "title": "" }, { "docid": "9bf99d48bc201147a9a9ad5af547a002", "text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.", "title": "" }, { "docid": "e173580f0dd327c78fd0b16b234112a1", "text": "Multi-view data is very popular in real-world applications, as different view-points and various types of sensors help to better represent data when fused across views or modalities. Samples from different views of the same class are less similar than those with the same view but different class. We consider a more general case that prior view information of testing data is inaccessible in multi-view learning. Traditional multi-view learning algorithms were designed to obtain multiple view-specific linear projections and would fail without this prior information available. That was because they assumed the probe and gallery views were known in advance, so the correct view-specific projections were to be applied in order to better learn low-dimensional features. To address this, we propose a Low-Rank Common Subspace (LRCS) for multi-view data analysis, which seeks a common low-rank linear projection to mitigate the semantic gap among different views. The low-rank common projection is able to capture compatible intrinsic information across different views and also well-align the within-class samples from different views. 
Furthermore, with a low-rank constraint on the view-specific projected data and that transformed by the common subspace, the within-class samples from multiple views would concentrate together. Different from the traditional supervised multi-view algorithms, our LRCS works in a weakly supervised way, where only the view information gets observed. Such a common projection can make our model more flexible when dealing with the problem of lacking prior view information of testing data. Two scenarios of experiments, robust subspace learning and transfer learning, are conducted to evaluate our algorithm. Experimental results on several multi-view datasets reveal that our proposed method outperforms state-of-the-art, even when compared with some supervised learning methods.", "title": "" }, { "docid": "6ff034e2ff0d54f7e73d23207789898d", "text": "This letter presents two high-gain, multidirector Yagi-Uda antennas for use within the 24.5-GHz ISM band, realized through a multilayer, purely additive inkjet printing fabrication process on a flexible substrate. Multilayer material deposition is used to realize these 3-D antenna structures, including a fully printed 120- μm-thick dielectric substrate for microstrip-to-slotline feeding conversion. The antennas are fabricated, measured, and compared to simulated results showing good agreement and highlighting the reliable predictability of the printing process. An endfire realized gain of 8 dBi is achieved within the 24.5-GHz ISM band, presenting the highest-gain inkjet-printed antenna at this end of the millimeter-wave regime. The results of this work further demonstrate the feasibility of utilizing inkjet printing for low-cost, vertically integrated antenna structures for on-chip and on-package integration throughout the emerging field of high-frequency wireless electronics.", "title": "" }, { "docid": "687103370a842230045fbd88ea1ebbf4", "text": "Annotations played a major role in Classics since the very beginning of the discipline. Some of the first attested examples of philological work, the so-called scholia, were in fact marginalia, namely comments written at the margins of a text. Over the centuries this kind of scholarship evolved until it became a genre on its own, the classical commentary, thus moving away from the text with the result that philologists had to devise a solution to linking together the commented and the commenting text. The solution to this problem is the system of canonical citations, a special kind of bibliographic references that are at the same time very precise and highly interoperable.\n In this paper we present HuCit, an ontology that models in depth the semantics of canonical citations. We discuss how it can be used to a) support the automatic extraction of canonical citations from texts and b) to publish them in machine-readable format on the Semantic Web. Finally, we describe how HuCit's machine-generated citation data can also be expressed as annotations by using the Open Annotation Collaboration (OAC) ontology, to the aim of increasing reuse and semantic interoperability.", "title": "" }, { "docid": "7bdc8d864e370f96475dc7d5078b053c", "text": "Nowadays, there is a trend to design complex, yet secure systems. In this context, the Trusted Execution Environment (TEE) was designed to enrich the previously defined trusted platforms. TEE is commonly known as an isolated processing environment in which applications can be securely executed irrespective of the rest of the system. 
However, TEE still lacks a precise definition as well as representative building blocks that systematize its design. Existing definitions of TEE are largely inconsistent and unspecific, which leads to confusion in the use of the term and its differentiation from related concepts, such as secure execution environment (SEE). In this paper, we propose a precise definition of TEE and analyze its core properties. Furthermore, we discuss important concepts related to TEE, such as trust and formal verification. We give a short survey on the existing academic and industrial ARM TrustZone-based TEE, and compare them using our proposed definition. Finally, we discuss some known attacks on deployed TEE as well as its wide use to guarantee security in diverse applications.", "title": "" }, { "docid": "7ad4f52279e85f8e20239e1ea6c85bbb", "text": "One of the most exciting but challenging endeavors in music research is to develop a computational model that comprehends the affective content of music signals and organizes a music collection according to emotion. In this paper, we propose a novel acoustic emotion Gaussians (AEG) model that defines a proper generative process of emotion perception in music. As a generative model, AEG permits easy and straightforward interpretations of the model learning processes. To bridge the acoustic feature space and music emotion space, a set of latent feature classes, which are learned from data, is introduced to perform the end-to-end semantic mappings between the two spaces. Based on the space of latent feature classes, the AEG model is applicable to both automatic music emotion annotation and emotion-based music retrieval. To gain insights into the AEG model, we also provide illustrations of the model learning process. A comprehensive performance study is conducted to demonstrate the superior accuracy of AEG over its predecessors, using two emotion annotated music corpora MER60 and MTurk. Our results show that the AEG model outperforms the state-of-the-art methods in automatic music emotion annotation. Moreover, for the first time a quantitative evaluation of emotion-based music retrieval is reported.", "title": "" }, { "docid": "2a1335003528b2da0b0471096df4dade", "text": "Data mining concerns theories, methodologies, and in particular, computer systems for knowledge extraction or mining from large amounts of data. Association rule mining is a general purpose rule discovery scheme. It has been widely used for discovering rules in medical applications. The diagnosis of diseases is a significant and tedious task in medicine. The detection of heart disease from various factors or symptoms is an issue which is not free from false presumptions often accompanied by unpredictable effects. Thus the effort to utilize knowledge and experience of numerous specialists and clinical screening data of patients collected in databases to facilitate the diagnosis process is considered a valuable option. In this paper, we presented an efficient approach for the prediction of heart attack risk levels from the heart disease database. Firstly, the heart disease database is clustered using the K-means clustering algorithm, which will extract the data relevant to heart attack from the database. This approach allows mastering the number of fragments through its k parameter. Subsequently the frequent patterns are mined from the extracted data, relevant to heart disease, using the MAFIA (Maximal Frequent Itemset Algorithm) algorithm. 
The machine learning algorithm is trained with the selected significant patterns for the effective prediction of heart attack. We have employed the ID3 algorithm as the training algorithm to show level of heart attack with the decision tree. The results showed that the designed prediction system is capable of predicting the heart attack effectively.", "title": "" } ]
scidocsrr
f90f5710a6b1a8dffc200b6d0eb1a509
Clustering of the self-organizing map
[ { "docid": "a42a19df66ab8827bfcf4c4ee709504d", "text": "We describe the numerical methods required in our approach to multi-dimensional scaling. The rationale of this approach has appeared previously. 1. Introduction We describe a numerical method for multidimensional scaling. In a companion paper [7] we describe the rationale for our approach to scaling, which is related to that of Shepard [9]. As the numerical methods required are largely unfamiliar to psychologists, and even have elements of novelty within the field of numerical analysis, it seems worthwhile to describe them. In [7] we suppose that there are n objects 1, · · · , n, and that we have experimental values 8;; of dissimilarity between them. For a configuration of points x1 , • • • , x .. in t:-dimensional space, with interpoint distances d;; , we defined the stress of the configuration by The stress is intendoo to be a measure of how well the configuration matches the data. More fully, it is supposed that the \"true\" dissimilarities result from some unknown monotone distortion of the interpoint distances of some \"true\" configuration, and that the observed dissimilarities differ from the true dissimilarities only because of random fluctuation. The stress is essentially the root-mean-square residual departure from this hypothesis. By definition, the best-fitting configuration in t-dimensional space, for a fixed value of t, is that configuration which minimizes the stress. The primary computational problem is to find that configuration. A secondary computational problem, of independent interest, is to find the values of", "title": "" } ]
[ { "docid": "242686291812095c5320c1c8cae6da27", "text": "In the modern high-performance transceivers, mixers (both upand down-converters) are required to have large dynamic range in order to meet the system specifications. The lower end of the dynamic range is indicated by the noise floor which tells how small a signal may be processed while the high end is determined by the non-linearity which causes distortion, compression and saturation of the signal and thus limits the maximum signal amplitude input to the mixer for the undistorted output. Compared to noise, the linearity requirement is much higher in mixer design because it is generally the limiting factor to the transceiver’s linearity. Therefore, this paper will emphasize on the linearization techniques for analog multipliers and mixers, which have been a very active research area since 1960s.", "title": "" }, { "docid": "67825e84cb2e636deead618a0868fa4a", "text": "Image compression is used specially for the compression of images where tolerable degradation is required. With the wide use of computers and consequently need for large scale storage and transmission of data, efficient ways of storing of data have become necessary. With the growth of technology and entrance into the Digital Age, the world has found itself amid a vast amount of information. Dealing with such enormous information can often present difficulties. Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages.JPEG and JPEG 2000 are two important techniques used for image compression. In this paper, we discuss about lossy image compression techniques and reviews of different basic lossy image compression methods are considered. The methods such as JPEG and JPEG2000 are considered. A conclusion is derived on the basis of these methods Keywords— Data compression, Lossy image compression, JPEG, JPEG2000, DCT, DWT", "title": "" }, { "docid": "7b99f2b0c903797c5ed33496f69481fc", "text": "Dance imagery is a consciously created mental representation of an experience, either real or imaginary, that may affect the dancer and her or his movement. In this study, imagery research in dance was reviewed in order to: 1. describe the themes and ideas that the current literature has attempted to illuminate and 2. discover the extent to which this literature fits the Revised Applied Model of Deliberate Imagery Use. A systematic search was performed, and 43 articles from 24 journals were found to fit the inclusion criteria. The articles were reviewed, analyzed, and categorized. The findings from the articles were then reported using the Revised Applied Model as a framework. Detailed descriptions of Who, What, When and Where, Why, How, and Imagery Ability were provided, along with comparisons to the field of sports imagery. Limitations within the field, such as the use of non-dance-specific and study-specific measurements, make comparisons and clear conclusions difficult to formulate. 
Future research can address these problems through the creation of dance-specific measurements, higher participant rates, and consistent methodologies between studies.", "title": "" }, { "docid": "7d117525263c970c7c23f2a8ba0357d6", "text": "Entity search is an emerging IR and NLP task that involves the retrieval of entities of a specific type in response to a query. We address the similar researcher search\" or the \"researcher recommendation\" problem, an instance of similar entity search\" for the academic domain. In response to a researcher name' query, the goal of a researcher recommender system is to output the list of researchers that have similar expertise as that of the queried researcher. We propose models for computing similarity between researchers based on expertise profiles extracted from their publications and academic homepages. We provide results of our models for the recommendation task on two publicly-available datasets. To the best of our knowledge, we are the first to address content-based researcher recommendation in an academic setting and demonstrate it for Computer Science via our system, ScholarSearch.", "title": "" }, { "docid": "5b32a82676846632b0f4d1bf0941156c", "text": "In this paper, we present the design of a Constrained Application Protocol (CoAP) proxy able to interconnect Web applications based on Hypertext Transfer Protocol (HTTP) and WebSocket with CoAP based Wireless Sensor Networks. Sensor networks are commonly used to monitor and control physical objects or environments. Smart Cities represent applications of such a nature. Wireless Sensor Networks gather data from their surroundings and send them to a remote application. This data flow may be short or long lived. The traditional HTTP long-polling used by Web applications may not be adequate in long-term communications. To overcome this problem, we include the WebSocket protocol in the design of the CoAP proxy. We evaluate the performance of the CoAP proxy in terms of latency and memory consumption. The tests consider long and short-lived communications. In both cases, we evaluate the performance obtained by the CoAP proxy according to the use of WebSocket and HTTP long-polling.", "title": "" }, { "docid": "3cc49362a90d5039a80f4a030869cf2d", "text": "Walking is only possible within immersive virtual environments that fit inside the boundaries of the user's physical workspace. To reduce the severity of the restrictions imposed by limited physical area, we introduce \"impossible spaces,\" a new design mechanic for virtual environments that wish to maximize the size of the virtual environment that can be explored with natural locomotion. Such environments make use of self-overlapping architectural layouts, effectively compressing comparatively large interior environments into smaller physical areas. We conducted two formal user studies to explore the perception and experience of impossible spaces. In the first experiment, we showed that reasonably small virtual rooms may overlap by as much as 56% before users begin to detect that they are in an impossible space, and that the larger virtual rooms that expanded to maximally fill our available 9.14m × 9.14m workspace may overlap by up to 31%. Our results also demonstrate that users perceive distances to objects in adjacent overlapping rooms as if the overall space was uncompressed, even at overlap levels that were overtly noticeable. 
In our second experiment, we combined several well-known redirection techniques to string together a chain of impossible spaces in an expansive outdoor scene. We then conducted an exploratory analysis of users' verbal feedback during exploration, which indicated that impossible spaces provide an even more powerful illusion when users are naive to the manipulation.", "title": "" }, { "docid": "01129fee4ee2553315b0c49b477bc352", "text": "An increasingly important challenge in network analysis is efficient detection and tracking of communities in dynamic networks for which changes arrive as a stream. There is a need for algorithms that can incrementally update and monitor communities whose evolution generates huge real-time data streams, such as the Internet or on-line social networks. In this paper, we propose LabelRankT, an on-line distributed algorithm for detection of communities in large-scale dynamic networks through stabilized label propagation. Results of tests on real-world networks demonstrate that LabelRankT has much lower computational costs than other algorithms. It also improves the quality of the detected communities compared to dynamic detection methods and matches the quality achieved by static detection approaches. Unlike most of other algorithms which apply only to binary networks, LabelRankT works on weighted and directed networks, which provides a flexible and promising solution for real-world applications.", "title": "" }, { "docid": "ef66164de5c5a853d47f33be842806ba", "text": "Raw optical motion capture data often includes errors such as occluded markers, mislabeled markers, and high frequency noise or jitter. Typically these errors must be fixed by hand - an extremely time-consuming and tedious task. Due to this, there is a large demand for tools or techniques which can alleviate this burden. In this research we present a tool that sidesteps this problem, and produces joint transforms directly from raw marker data (a task commonly called \"solving\") in a way that is extremely robust to errors in the input data using the machine learning technique of denoising. Starting with a set of marker configurations, and a large database of skeletal motion data such as the CMU motion capture database [CMU 2013b], we synthetically reconstruct marker locations using linear blend skinning and apply a unique noise function for corrupting this marker data - randomly removing and shifting markers to dynamically produce billions of examples of poses with errors similar to those found in real motion capture data. We then train a deep denoising feed-forward neural network to learn a mapping from this corrupted marker data to the corresponding transforms of the joints. Once trained, our neural network can be used as a replacement for the solving part of the motion capture pipeline, and, as it is very robust to errors, it completely removes the need for any manual clean-up of data. Our system is accurate enough to be used in production, generally achieving precision to within a few millimeters, while additionally being extremely fast to compute with low memory requirements.", "title": "" }, { "docid": "cb4966a838bbefccbb1b74e5f541ce76", "text": "Theories of human behavior are an important but largely untapped resource for software engineering research. They facilitate understanding of human developers’ needs and activities, and thus can serve as a valuable resource to researchers designing software engineering tools. 
Furthermore, theories abstract beyond specific methods and tools to fundamental principles that can be applied to new situations. Toward filling this gap, we investigate the applicability and utility of Information Foraging Theory (IFT) for understanding information-intensive software engineering tasks, drawing upon literature in three areas: debugging, refactoring, and reuse. In particular, we focus on software engineering tools that aim to support information-intensive activities, that is, activities in which developers spend time seeking information. Regarding applicability, we consider whether and how the mathematical equations within IFT can be used to explain why certain existing tools have proven empirically successful at helping software engineers. Regarding utility, we applied an IFT perspective to identify recurring design patterns in these successful tools, and consider what opportunities for future research are revealed by our IFT perspective.", "title": "" }, { "docid": "8067f318656078b44993a67f5ac1c274", "text": "The security of most existing cryptocurrencies is based on a concept called Proof-of-Work, in which users must solve a computationally hard cryptopuzzle to authorize transactions (“one unit of computation, one vote”). This leads to enormous expenditure on hardware and electricity in order to collect the rewards associated with transaction authorization. Proof-of-Stake is an alternative concept that instead selects users to authorize transactions proportional to their wealth (“one coin, one vote”). Some aspects of the two paradigms are the same. For instance, obtaining voting power in Proof-of-Stake has a monetary cost just as in Proof-of-Work: a coin cannot be freely duplicated any more easily than a unit of computation. However some aspects are fundamentally different. In particular, exactly because Proof-of-Stake is wasteless, there is no inherent resource cost to deviating (commonly referred to as the “Nothing-at-Stake” problem). In contrast to prior work, we focus on incentive-driven deviations (any participant will deviate if doing so yields higher revenue) instead of adversarial corruption (an adversary may take over a significant fraction of the network, but the remaining players follow the protocol). The main results of this paper are several formal barriers to designing incentive-compatible proof-of-stake cryptocurrencies (that don’t apply to proof-of-work).", "title": "" }, { "docid": "5625166c3e84059dd7b41d3c0e37e080", "text": "External border surveillance is critical to the security of every state and the challenges it poses are changing and likely to intensify. Wireless sensor networks (WSN) are a low cost technology that provide an intelligence-led solution to effective continuous monitoring of large, busy, and complex landscapes. The linear network topology resulting from the structure of the monitored area raises challenges that have not been adequately addressed in the literature to date. 
In this paper, we identify an appropriate metric to measure the quality of WSN border crossing detection. Furthermore, we propose a method to calculate the required number of sensor nodes to deploy in order to achieve a specified level of coverage according to the chosen metric in a given belt region, while maintaining radio connectivity within the network. Then, we contribute a novel cross layer routing protocol, called levels division graph (LDG), designed specifically to address the communication needs and link reliability for topologically linear WSN applications. The performance of the proposed protocol is extensively evaluated in simulations using realistic conditions and parameters. LDG simulation results show significant performance gains when compared with its best rival in the literature, dynamic source routing (DSR). Compared with DSR, LDG improves the average end-to-end delays by up to 95%, packet delivery ratio by up to 20%, and throughput by up to 60%, while maintaining comparable performance in terms of normalized routing load and energy consumption.", "title": "" }, { "docid": "8758425824753fea372eeeeb18ee5856", "text": "By adopting the distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, genetic algorithm, artificial bee colony algorithm, and ant colony optimization. These algorithms have already shown favorable performances. However, with the objects becoming increasingly complex, it is becoming gradually more difficult for these algorithms to meet human’s demand in terms of accuracy and time. Designing a new algorithm to seek better solutions for optimization problems is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits such as echolocation, information exchanges, cooperation, and division of labor. Combining these biological characteristics and living habits with swarm intelligence and bringing them into optimization problems, we propose a brand new algorithm named the ‘dolphin swarm algorithm’ in this paper. We also provide the definitions of the algorithm and specific descriptions of the four pivotal phases in the algorithm, which are the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, genetic algorithm, and artificial bee colony algorithm. The convergence rates and benchmark function results of these four algorithms are compared to testify the effect of the dolphin swarm algorithm. The results show that in most cases, the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some great features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-free, and no specific demand on benchmark functions. Moreover, the dolphin swarm algorithm is particularly appropriate to optimization problems, with more calls of fitness functions and fewer individuals.", "title": "" }, { "docid": "8c381b81b193032633e2fa836f0d7e23", "text": "This study presents a modified flying capacitor three-level buck dc-dc converter with improved dynamic response. First, the limitations in the transient response improvement of the conventional and three-level buck converters are discussed. 
Then, the three-level buck converter is modified in a way that it would benefit from a faster dynamic during sudden changes in the load. Finally, a controller is proposed that detects load transients and responds appropriately. In order to verify the effectiveness of the modified topology and the proposed transient controller, a simulation model and a hardware prototype are developed. Analytical, simulation, and experimental results show a significant dynamic response improvement.", "title": "" }, { "docid": "a58ede53f0f2452e60528d5a470c0d7e", "text": "Background. Controversies still prevail as to how exactly epigastric hernia occurs. Both the vascular lacunae hypothesis and the tendinous fibre decussation hypothesis have proved to be widely accepted as possible explanations for the etiology. Patient. We present a patient who suffered from early-onset epigastric hernia. Conclusions. We believe the identification of the ligamentum teres and its accompanying vessel at its fascial defect supports the vascular lacunae hypothesis. However, to further our understanding, biopsy of the linea alba in patients with epigastric hernias is indicated.", "title": "" }, { "docid": "1b394e01c8e2ea7957c62e3e0b15fbd7", "text": "In this paper, we present results on the implementation of a hierarchical quaternion based attitude and trajectory controller for manual and autonomous flights of quadrotors. Unlike previous papers on using quaternion representation, we use the nonlinear complementary filter that estimates the attitude in quaternions and as such does not involve Euler angles or rotation matrices. We show that for precise trajectory tracking, the resulting attitude error dynamics of the system is non-autonomous and is almost globally asymptotically and locally exponentially stable under the proposed control law. We also show local exponential stability of the translational dynamics under the proposed trajectory tracking controller which sits at the highest level of the hierarchy. Thus by input-to-state stability, the entire system is locally exponentially stable. The quaternion based observer and controllers are available as open-source.", "title": "" }, { "docid": "c517788095af71fcd1b5b02843d5f9f3", "text": "MOTIVATION\nWith the increasing availability of large protein-protein interaction networks, the question of protein network alignment is becoming central to systems biology. Network alignment is further delineated into two sub-problems: local alignment, to find small conserved motifs across networks, and global alignment, which attempts to find a best mapping between all nodes of the two networks. In this article, our aim is to improve upon existing global alignment results. Better network alignment will enable, among other things, more accurate identification of functional orthologs across species.\n\n\nRESULTS\nWe introduce IsoRankN (IsoRank-Nibble) a global multiple-network alignment tool based on spectral clustering on the induced graph of pairwise alignment scores. IsoRankN outperforms existing algorithms for global network alignment in coverage and consistency on multiple alignments of the five available eukaryotic networks. 
Being based on spectral methods, IsoRankN is both error tolerant and computationally efficient.\n\n\nAVAILABILITY\nOur software is available freely for non-commercial purposes on request from: http://isorank.csail.mit.edu/.", "title": "" }, { "docid": "40e0b3cfe54b69dce5977f6bc22c2bd6", "text": "This paper links the direct-sequence code-division multiple access (DS-CDMA) multiuser separation-equalization-detection problem to the parallel factor (PARAFAC) model, which is an analysis tool rooted in psychometrics and chemometrics. Exploiting this link, it derives a deterministic blind PARAFAC DS-CDMA receiver with performance close to nonblind minimum mean-squared error (MMSE). The proposed PARAFAC receiver capitalizes on code, spatial, and temporal diversity-combining, thereby supporting small sample sizes, more users than sensors, and/or less spreading than users. Interestingly, PARAFAC does not require knowledge of spreading codes, the specifics of multipath (interchip interference), DOA-calibration information, finite alphabet/constant modulus, or statistical independence/whiteness to recover the information-bearing signals. Instead, PARAFAC relies on a fundamental result regarding the uniqueness of low-rank three-way array decomposition due to Kruskal (and generalized herein to the complex-valued case) that guarantees identifiability of all relevant signals and propagation parameters. These and other issues are also demonstrated in pertinent simulation experiments.", "title": "" }, { "docid": "577e7903eb355cbf790fb1c159a08e49", "text": "We present several new algorithms for multiagent reinforcement learning. A common feature of these algorithms is a parameterized, structured representation of a policy or value function. This structure is leveraged in an approach we call coordinated reinforcement learning, by which agents coordinate both their action selection activities and their parameter updates. Within the limits of our parametric representations, the agents will determine a jointly optimal action without explicitly considering every possible action in their exponentially large joint action space. Our methods differ from many previous reinforcement learning approaches to multiagent coordination in that structured communication and coordination between agents appears at the core of both the learning algorithm and the execution architecture.", "title": "" }, { "docid": "bb6dfed56811136cb3efbb5e3939a386", "text": "Advancements in IC manufacturing technologies allow for building very large devices with billions of transistors and with complex interactions between them encapsulated in a huge number of design rules. To ease designers' efforts in dealing with electrical and manufacturing problems, regular layout style seems to be a viable option. In this paper we analyze regular layouts in an IC manufacturability context and define their desired properties. We introduce the OPC-free IC design methodology and study properties of cells designed for this layout style that have various degrees of regularity.", "title": "" } ]
scidocsrr
7c72b94fe212d4eb826ecc3c21f449f5
Proposal for a Conceptual Framework for Educators to Describe and Design MOOCs
[ { "docid": "86ecf68fcd67913086df2122ad99c763", "text": "Behaviorism, cognitivism, and constructivism are the three broad learning theories most often utilized in the creation of instructional environments. These theories, however, were developed in a time when learning was not impacted through technology. Over the last twenty years, technology has reorganized how we live, how we communicate, and how we learn. Learning needs and theories that describe learning principles and processes, should be reflective of underlying social environments. Vaill emphasizes that “learning must be a way of being – an ongoing set of attitudes and actions by individuals and groups that they employ to try to keep abreast o the surprising, novel, messy, obtrusive, recurring events...” (1996, p.42).", "title": "" } ]
[ { "docid": "9d0f169c3891401c787a83ebb8e3f6be", "text": "BACKGROUND\nBrooke-Spiegler syndrome is a hereditary tumor predisposition disorder characterized by the development of cylindromas, trichoepitheliomas, and spiradenomas. Predilection sites of the disease are hair follicles and sweat glands of the head and neck. In some patients, the tumors can coalesce to so-called turban tumors, which then usually cause cosmetic, psychological, and functional impairment. A curative therapy is not yet available, and thus total scalp excision followed by split skin graft is evolving as a frequently applied therapy. However, this treatment can lead to the formation of a thin and vulnerable skin, which hampers wearing a wig. Therefore, a more robust and functional solution is preferable. Here, we report on a woman with a turban tumor who suffered enormously from the disease and had secluded herself from social life.\n\n\nMETHODS\nWe treated her with a total scalp excision down to the periosteum, followed by sequential combined reconstruction with an artificial dermal template and split skin grafts.\n\n\nRESULTS\nThe treatment resulted in formation of a robust and flexible skin.\n\n\nCONCLUSION\nTreatment of turban tumor is a challenge considering the localization and extensiveness of the tumor masses. This novel therapy for turban tumor leads to a very good cosmetic and functional outcome.", "title": "" }, { "docid": "84a35fa958fae192b9d97cbb165cdebe", "text": "COCO is a platform for Comparing Continuous Optimizers in a black-box setting. It aims at automatizing the tedious and repetitive task of benchmarking numerical optimization algorithms to the greatest possible extent. We present the rationals behind the development of the platform as a general proposition for a guideline towards better benchmarking. We detail underlying fundamental concepts of COCO such as the definition of a problem, the idea of instances, the relevance of target values, and runtime as central performance measure. Finally, we give a quick overview of the basic code structure and the currently available test suites.", "title": "" }, { "docid": "ac979967ab992da6115852e00e4769f2", "text": "Experiments were carried out to study the effect of high dose of “tulsi” (Ocimum sanctum Linn.) pellets on testis and epididymis in male albino rat. Wheat flour, oil and honey pellets of tulsi leaves were fed to albino rat, at 400mg/ 100g body weight per day, along with normal diet, for a period of 72 days. One group of tulsi-fed rats was left for recovery, after the last dose fed on day 72, up to day 120. This high dose of tulsi was found to cause durationdependant decrease of testis weight and derangements in the histo-architecture of testis as well as epididymis. The diameter of seminiferous tubules decreased considerably, with corresponding increase in the interstitium. Spermatogenesis was arrested, accompanied by degeneration of seminiferous epithelial elements. Epididymal tubules regressed, and the luminal spermatozoa formed a coagulum. In the recovery group, testis and epididymis regained normal weights, where as spermatogenesis was partially restored. Thus, high dose of tulsi leaf affects testicular and epididymyal structure and function reversibly.", "title": "" }, { "docid": "cb85f458a4e6fec7d6b16c8a046bb692", "text": "Rational use of water can be a powerful tool to promote sustainability on university campuses. 
Other than resource and financial savings, it aims to support technological and behavior innovation towards a more balanced relationship between human activities and nature. This work reports on a water saving program case study, led by a research group at a university in the northeast of Brazil. It describes and discusses methods used and results obtained. From 1999 to 2008 the program reduced per capita water use by half at the university. It has brought significant resource savings to the institution. Internal results foster the implementation of cooperative projects between the university and public and private partners. All these projects involve engineers, social workers and undergraduate students from different courses. However, internal and external results have been insufficient to guarantee the internalization of the program in routine activities of the university. The permanence of the program still depends on the research group that created and manages it. The paper also presents the difficulties faced in sustaining a program like this at a Brazilian university and discusses future action to be taken to achieve the pro-", "title": "" }, { "docid": "ba6fe1b26d76d7ff3e84ddf3ca5d3e35", "text": "The spacing effect describes the robust finding that long-term learning is promoted when learning events are spaced out in time rather than presented in immediate succession. Studies of the spacing effect have focused on memory processes rather than for other types of learning, such as the acquisition and generalization of new concepts. In this study, early elementary school children (5- to 7-year-olds; N = 36) were presented with science lessons on 1 of 3 schedules: massed, clumped, and spaced. The results revealed that spacing lessons out in time resulted in higher generalization performance for both simple and complex concepts. Spaced learning schedules promote several types of learning, strengthening the implications of the spacing effect for educational practices and curriculum.", "title": "" }, { "docid": "217b7d425d280a1ebb55862cc9bfd848", "text": "The present study is focused on a review of the current state of investigating music-evoked emotions experimentally, theoretically and with respect to their therapeutic potentials. After a concise historical overview and a schematic of the hearing mechanisms, experimental studies on music listeners and on music performers are discussed, starting with the presentation of characteristic musical stimuli and the basic features of tomographic imaging of emotional activation in the brain, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), which offer high spatial resolution in the millimeter range. The progress in correlating activation imaging in the brain to the psychological understanding of music-evoked emotion is demonstrated and some prospects for future research are outlined. Research in psychoneuroendocrinology and molecular markers is reviewed in the context of music-evoked emotions and the results indicate that the research in this area should be intensified. An assessment of studies involving measuring techniques with high temporal resolution down to the 10 ms range, as, e.g., electroencephalography (EEG), event-related brain potentials (ERP), magnetoencephalography (MEG), skin conductance response (SCR), finger temperature, and goose bump development (piloerection) can yield information on the dynamics and kinetics of emotion. 
Genetic investigations reviewed suggest the heredity transmission of a predilection for music. Theoretical approaches to musical emotion are directed to a unified model for experimental neurological evidence and aesthetic judgment. Finally, the reports on musical therapy are briefly outlined. The study concludes with an outlook on emerging technologies and future research fields.", "title": "" }, { "docid": "7b83005861e8c0cfe7a13736e9a75ab6", "text": "This thesis presents a study into the nature and structure of academic lectures, with a special focus on metadiscourse phenomena. Metadiscourse refers to a set of linguistics expressions that signal specific discourse functions such as the Introduction: “Today we will talk about...” and Emphasising: “This is an important point”. These functions are important because they are part of lecturers’ strategies in understanding of what happens in a lecture. The knowledge of their presence and identity could serve as initial steps toward downstream applications that will require functional analysis of lecture content such as a browser for lectures archives, summarisation, or an automatic minute-taker for lectures. One challenging aspect for metadiscourse detection and classification is that the set of expressions are semifixed, meaning that different phrases can indicate the same function. To that end a four-stage approach is developed to study metadiscourse in academic lectures. Firstly, a corpus of metadiscourse for academic lectures from Physics and Economics courses is built by adapting an existing scheme that describes functional-oriented metadiscourse categories. Second, because producing reference transcripts is a time-consuming task and prone to some errors due to the manual efforts required, an automatic speech recognition (ASR) system is built specifically to produce transcripts of lectures. Since the reference transcripts lack time-stamp information, an alignment system is applied to the reference to be able to evaluate the ASR system. Then, a model is developed using Support Vector Machines (SVMs) to classify metadiscourse tags using both textual and acoustical features. The results show that n-grams are the most inductive features for the task; however, due to data sparsity the model does not generalise for unseen n-grams. This limits its ability to solve the variation issue in metadiscourse expressions. Continuous Bag-of-Words (CBOW) provide a promising solution as this can capture both the syntactic and semantic similarities between words and thus is able to solve the generalisation issue. However, CBOW ignores the word order completely, something which is very important to be retained when classifying metadiscourse tags. The final stage aims to address the issue of sequence modelling by developing a joint CBOW and Convolutional Neural Network (CNN) model. CNNs can work with continuous features such as word embedding in an elegant and robust fashion by producing a fixedsize feature vector that is able to identify indicative local information for the tagging task. The results show that metadiscourse tagging using CNNs outperforms the SVMs model significantly even on ASR outputs, owing to its ability to predict a sequence of words that is more representative for the task regardless of its position in the sentence. In addition, the inclusion of other features such as part-of-speech (POS) tags and prosodic cues improved the results further. These findings are consistent in both disciplines. 
The final contribution in this thesis is to investigate the suitability of using metadiscourse tags as discourse features in the lecture structure segmentation model, despite the fact that the task is approached as a classification model and most of the state-of-art models are unsupervised. In general, the obtained results show remarkable improvements over the state-of-the-art models in both disciplines.", "title": "" }, { "docid": "462813402246b53bb4af46ca3b407195", "text": "We present the performance of a patient with acquired dysgraphia, DS, who has intact oral spelling (100% correct) but severely impaired written spelling (7% correct). Her errors consisted entirely of well-formed letter substitutions. This striking dissociation is further characterized by consistent preservation of orthographic, as opposed to phonological, length in her written output. This pattern of performance indicates that DS has intact graphemic representations, and that her errors are due to a deficit in letter shape assignment. We further interpret the occurrence of a small percentage of lexical errors in her written responses and a significant effect of letter frequencies and transitional probabilities on the pattern of letter substitutions as the result of a repair mechanism that locally constrains DS' written output.", "title": "" }, { "docid": "2eba092d19cc8fb35994e045f826e950", "text": "Deep neural networks have proven to be particularly effective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardware-oriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy efficiency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-efficient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their effectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. This article represents the first survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the field.", "title": "" }, { "docid": "599d814fd3b3a758f3b2459b74aeb92c", "text": "Relation classification is a crucial ingredient in numerous information extraction systems seeking to mine structured facts from text. We propose a novel convolutional neural network architecture for this task, relying on two levels of attention in order to better discern patterns in heterogeneous contexts. This architecture enables end-to-end learning from task-specific labeled data, forgoing the need for external knowledge such as explicit dependency structures. 
Experiments show that our model outperforms previous state-of-the-art methods, including those relying on much richer forms of prior knowledge.", "title": "" }, { "docid": "ab4a788fd82d5953e22032b1361328c2", "text": "To recognize the application of Artificial Neural Networks (ANNs) in weather forecasting, especially in rainfall forecasting, a comprehensive literature review from 1923 to 2012 is presented in this paper. It is found that ANN architectures such as BPN and RBFN are well established for forecasting chaotic behavior and are efficient enough to forecast monsoon rainfall as well as other weather parameters over smaller geographical regions.", "title": "" }, { "docid": "3b300b9275b6da1aff685e5ca9b71252", "text": "This paper presents an algorithm developed based on hidden Markov model for cues fusion and event inference in soccer video. Four events, shoot, foul, offside and normal playing, are defined to be detected. The states of the events are employed to model the observations of the five cues, which are extracted from the shot sequences directly. The experimental results show the algorithm is effective and robust in inferring events from roughly extracted cues.", "title": "" }, { "docid": "7b2d1af8db446019ba45511098dddefe", "text": "This article proposes a novel online portfolio selection strategy named “Passive Aggressive Mean Reversion” (PAMR). Unlike traditional trend following approaches, the proposed approach relies upon the mean reversion relation of financial markets. Equipped with online passive aggressive learning technique from machine learning, the proposed portfolio selection strategy can effectively exploit the mean reversion property of markets. By analyzing PAMR’s update scheme, we find that it nicely trades off between portfolio return and volatility risk and reflects the mean reversion trading principle. We also present several variants of PAMR algorithm, including a mixture algorithm which mixes PAMR and other strategies. We conduct extensive numerical experiments to evaluate the empirical performance of the proposed algorithms on various real datasets. The encouraging results show that in most cases the proposed PAMR strategy outperforms all benchmarks and almost all state-of-the-art portfolio selection strategies under various performance metrics. In addition to its superior performance, the proposed PAMR runs extremely fast and thus is very suitable for real-life online trading applications. The experimental testbed including source codes and data sets is available at http://www.cais.ntu.edu.sg/~chhoi/PAMR/.", "title": "" }, { "docid": "64a634a76a39fbc1930a7ca66e21e125", "text": "This paper presents a broadband cascode SiGe power amplifier (PA) in the polar transmitter (TX) system using the envelope-tracking (ET) technique. The cascode PA achieves the power-added efficiency (PAE) of >30% across the frequency range of 0.6∼2.4 GHz in continuous wave (CW) mode. The ET-based polar TX system using this cascode PA is evaluated and compared with the conventional stand-alone cascode PA. The experimental data shows that the cascode PA is successfully linearized by the ET scheme, passing the stringent WiMAX spectral mask and the required error vector magnitude (EVM). The entire polar TX system reaches the PAE of 30%/36% at the average output power of 18/17 dBm at 2.3/0.7 GHz for WiMAX 16QAM 3.5 MHz signals. 
These measurement results suggest that our saturated cascode SiGe PA can be attractive for dual-mode WiMAX applications.", "title": "" }, { "docid": "38f289b085f2c6e2d010005f096d8fd7", "text": "We present easy-to-use TensorFlow Hub sentence embedding models having good task transfer performance. Model variants allow for trade-offs between accuracy and compute resources. We report the relationship between model complexity, resources, and transfer performance. Comparisons are made with baselines without transfer learning and to baselines that incorporate word-level transfer. Transfer learning using sentence-level embeddings is shown to outperform models without transfer learning and often those that use only word-level transfer. We show good transfer task performance with minimal training data and obtain encouraging results on word embedding association tests (WEAT) of model bias.", "title": "" }, { "docid": "38d04471b8166ef7a0955881db67f494", "text": "Changes in educational thinking and in medical program accreditation provide an opportunity to reconsider approaches to undergraduate medical education. Current developments in competency-based medical education (CBME), in particular, present both possibilities and challenges for undergraduate programs. CBME does not specify particular learning strategies or formats, but rather provides a clear description of intended outcomes. This approach has the potential to yield authentic curricula for medical practice and to provide a seamless linkage between all stages of lifelong learning. At the same time, the implementation of CBME in undergraduate education poses challenges for curriculum design, student assessment practices, teacher preparation, and systemic institutional change, all of which have implications for student learning. Some of the challenges of CBME are similar to those that can arise in the implementation of any integrated program, while others are specific to the adoption of outcome frameworks as an organizing principle for curriculum design. This article reviews a number of issues raised by CBME in the context of undergraduate programs and provides examples of best practices that might help to address these issues.", "title": "" }, { "docid": "c1fdd4c47ecaa1cbe2d0d684a58ab01c", "text": "Sentiment analysis is an important but challenging task. Remarkable success has been achieved on domains where sufficient labeled training data is available. Nevertheless, annotating sufficient data is labor-intensive and time-consuming, establishing significant barriers for adapting the sentiment classification systems to new domains. In this paper, we introduce a Capsule network for sentiment analysis in domain adaptation scenario with semantic rules (CapsuleDAR). CapsuleDAR exploits capsule network to encode the intrinsic spatial part-whole relationship constituting domain invariant knowledge that bridges the knowledge gap between the source and target domains. Furthermore, we also propose a rule network to incorporate the semantic rules into the capsule network to enhance the comprehensive sentence representation learning. Extensive experiments are conducted to evaluate the effectiveness of the proposed CapsuleDAR model on a real world data set of four domains. 
Experimental results demonstrate that CapsuleDAR achieves substantially better performance than the strong competitors for the cross-domain sentiment classification task.", "title": "" }, { "docid": "43e831b69559ae228bae68b369dac2e3", "text": "Virtualization technology enables Cloud providers to efficiently use their computing services and resources. Even if the benefits in terms of performance, maintenance, and cost are evident, however, virtualization has also been exploited by attackers to devise new ways to compromise a system. To address these problems, research security solutions have evolved considerably over the years to cope with new attacks and threat models. In this work, we review the protection strategies proposed in the literature and show how some of the solutions have been invalidated by new attacks, or threat models, that were previously not considered. The goal is to show the evolution of the threats, and of the related security and trust assumptions, in virtualized systems that have given rise to complex threat models and the corresponding sophistication of protection strategies to deal with such attacks. We also categorize threat models, security and trust assumptions, and attacks against a virtualized system at the different layers—in particular, hardware, virtualization, OS, and application.", "title": "" }, { "docid": "6a2b9761b745f4ece1bba3fab9f5d8b1", "text": "Driven by the evolution of consumer-to-consumer (C2C) online marketplaces, we examine the role of communication tools (i.e., an instant messenger, internal message box and a feedback system), in facilitating dyadic online transactions in the Chinese C2C marketplace. Integrating the Chinese concept of guanxi with theories of social translucence and social presence, we introduce a structural model that explains how rich communication tools influence a website’s interactivity and presence, subsequently building trust and guanxi among buyers and sellers, and ultimately predicting buyers’ repurchase intentions. The data collected from 185 buyers in TaoBao, China’s leading C2C online marketplace, strongly support the proposed model. We believe that this research is the first formal study to show evidence of guanxi in online C2C marketplaces, and it is attributed to the role of communication tools to enhance a website’s interactivity and presence.", "title": "" }, { "docid": "5206c3a376b76ff75f978cf11969e919", "text": "When performing large-scale perpetual localization and mapping one faces problems like memory consumption or repetitive and dynamic scene elements requiring robust data association. We propose a visual SLAM method which handles short- and long-term scene dynamics in large environments using a single camera only. Through visibility-dependent map filtering and efficient keyframe organization we reach a considerable performance gain only through incorporation of a slightly more complex map representation. Experiments on a large, mixed indoor/outdoor dataset over a time period of two weeks demonstrate the scalability and robustness of our approach.", "title": "" } ]
scidocsrr
a04a85f521f200520347986b23469af5
Efficient BVH-based Collision Detection Scheme with Ordering and Restructuring
[ { "docid": "1b82ef890fbbf033781ea65202b2f4b9", "text": "We present a fast GPU-based streaming algorithm to perform collision queries between deformable models. Our approach is based on hierarchical culling and reduces the computation to generating different streams. We present a novel stream registration method to compact the streams and efficiently compute the potentially colliding pairs of primitives. We also use a deferred front tracking method to lower the memory overhead. The overall algorithm has been implemented on different GPUs and we have evaluated its performance on non-rigid and deformable simulations. We highlight our speedups over prior CPU-based and GPU-based algorithms. In practice, our algorithm can perform inter-object and intra-object computations on models composed of hundreds of thousands of triangles in tens of milliseconds.", "title": "" } ]
[ { "docid": "d7310e830f85541aa1d4b94606c1be0c", "text": "We present a practical framework to automatically detect shadows in real world scenes from a single photograph. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The 7-layer network architecture of each ConvNet consists of alternating convolution and sub-sampling layers. The proposed framework learns features at the super-pixel level and along the object boundaries. In both cases, features are extracted using a context aware window centered at interest points. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow contours. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.", "title": "" }, { "docid": "39c5f28d8385dde119ae2cf4807fa98a", "text": "Random mutation is pervasive in nature and has been mathematically modeled extensively. It is a primary mechanism by which cancer and pathogens resist drugs and other systemic treatments. For example, most cancers are still incurable primarily because they develop resistance to anti-cancer drugs. This resistance arises via a variety of mechanisms [1-4], and mathematical modeling over the past four decades has improved our understanding of it [517]. It is now recognized that tumors are heterogeneous, resulting from random mutation of cancer-cell DNA, both before and after treatment begins, and this is an important, if not the primary, source of resistance-generating mutations in cancer [18-25]. Some mathematical models of resistance have relied on this assumption using an analytic deterministic approach and have led to elegant insights [5,9,10] when applied to limited, simple cases. Others have employed more complex mathematical stochastic machinery to produce analytical solutions to their equations [8,11-16] leading to general treatment suggestions, but these have generally addressed somewhat idealized situations and required significant computation, limiting their usefulness in the clinical setting.", "title": "" }, { "docid": "8d80bfe0015c6b867c5ad8311e45d3fa", "text": "OBJECTIVES\nIt has been argued that mixed methods research can be useful in nursing and health science because of the complexity of the phenomena studied. However, the integration of qualitative and quantitative approaches continues to be one of much debate and there is a need for a rigorous framework for designing and interpreting mixed methods research. This paper explores the analytical approaches (i.e. parallel, concurrent or sequential) used in mixed methods studies within healthcare and exemplifies the use of triangulation as a methodological metaphor for drawing inferences from qualitative and quantitative findings originating from such analyses.\n\n\nDESIGN\nThis review of the literature used systematic principles in searching CINAHL, Medline and PsycINFO for healthcare research studies which employed a mixed methods approach and were published in the English language between January 1999 and September 2009.\n\n\nRESULTS\nIn total, 168 studies were included in the results. Most studies originated in the United States of America (USA), the United Kingdom (UK) and Canada. 
The analytic approach most widely used was parallel data analysis. A number of studies used sequential data analysis; far fewer studies employed concurrent data analysis. Very few of these studies clearly articulated the purpose for using a mixed methods design. The use of the methodological metaphor of triangulation on convergent, complementary, and divergent results from mixed methods studies is exemplified and an example of developing theory from such data is provided.\n\n\nCONCLUSION\nA trend for conducting parallel data analysis on quantitative and qualitative data in mixed methods healthcare research has been identified in the studies included in this review. Using triangulation as a methodological metaphor can facilitate the integration of qualitative and quantitative findings, help researchers to clarify their theoretical propositions and the basis of their results. This can offer a better understanding of the links between theory and empirical findings, challenge theoretical assumptions and develop new theory.", "title": "" }, { "docid": "c116aab75223001bb4d216501b3c3b39", "text": "OBJECTIVE\nBurnout, a psychological consequence of prolonged work stress, has been shown to coexist with physical and mental disorders. The aim of this study was to investigate whether burnout is related to all-cause mortality among employees.\n\n\nMETHODS\nIn 1996, of 15,466 Finnish forest industry employees, 9705 participated in the 'Still Working' study and 8371 were subsequently identified from the National Population Register. Those who had been treated in a hospital for the most common causes of death prior to the assessment of burnout were excluded on the basis of the Hospital Discharge Register, resulting in a final study population of 7396 people. Burnout was measured using the Maslach Burnout Inventory-General Survey. Dates of death from 1996 to 2006 were extracted from the National Mortality Register. Mortality was predicted with Cox hazard regression models, controlling for baseline sociodemographic factors and register-based health status according to entitled medical reimbursement and prescribed medication for mental health problems, cardiac risk factors, and pain problems.\n\n\nRESULTS\nDuring the 10-year 10-month follow-up, a total of 199 employees had died. The risk of mortality per one-unit increase in burnout was 35% higher (95% CI 1.07-1.71) for total score and 26% higher (0.99-1.60) for exhaustion, 29% higher for cynicism (1.03-1.62), and 22% higher for diminished professional efficacy (0.96-1.55) in participants who had been under 45 at baseline. After adjustments, only the associations regarding burnout and exhaustion were statistically significant. Burnout was not related to mortality among the older employees.\n\n\nCONCLUSION\nBurnout, especially work-related exhaustion, may be a risk for overall survival.", "title": "" }, { "docid": "0ff7f69f341f62711b383699746452fd", "text": "Dynamic sensitivity control (DSC) is being discussed within the new IEEE 802.11ax task group as one of the potential techniques to improve the system performance for next generation Wi-Fi in high capacity and dense deployment environments, e.g. stadiums, conference venues, shopping malls, etc. However, there appears to be lack of consensus regarding the adoption of DSC within the group. This paper reports on investigations into the performance of the baseline DSC technique proposed in the IEEE 802.11ax task group under realistic scenarios defined by the task group. 
Simulations were carried out and the results suggest that compared with the default case (no DSC), the use of DSC may lead to mixed results in terms of throughput and fairness with the gain varying depending on factors like inter-AP distance, node distribution, node density and the DSC margin value. Further, we also highlight avenues for mitigating the shortcomings of DSC found in this study.", "title": "" }, { "docid": "e7a9584974596768d888d1d065135554", "text": "Footwear is an integral part of daily life. Embedding sensors and electronics in footwear for various different applications started more than two decades ago. This review article summarizes the developments in the field of footwear-based wearable sensors and systems. The electronics, sensing technologies, data transmission, and data processing methodologies of such wearable systems are all principally dependent on the target application. Hence, the article describes key application scenarios utilizing footwear-based systems with critical discussion on their merits. The reviewed application scenarios include gait monitoring, plantar pressure measurement, posture and activity classification, body weight and energy expenditure estimation, biofeedback, navigation, and fall risk applications. In addition, energy harvesting from the footwear is also considered for review. The article also attempts to shed light on some of the most recent developments in the field along with the future work required to advance the field.", "title": "" }, { "docid": "f44bfa0a366fb50a571e6df9f4c3f91d", "text": "BACKGROUND\nIn silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process.\n\n\nRESULTS\ncamb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), as well as techniques to create model ensembles using techniques from the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2).\n\n\nCONCLUSIONS\nOverall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed model generation, in order to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application.Graphical abstractFrom compounds and data to models: a complete model building workflow in one package.", "title": "" }, { "docid": "9826dcd8970429b1f3398128eec4335b", "text": "This article provides an overview of recent contributions to the debate on the ethical use of previously collected biobank samples, as well as a country report about how this issue has been regulated in Spain by means of the new Biomedical Research Act, enacted in the summer of 2007. 
By contrasting the Spanish legal situation with the wider discourse of international bioethics, we identify and discuss a general trend moving from the traditional requirements of informed consent towards new models more favourable to research in a post-genomic context.", "title": "" }, { "docid": "c3261d1552912642d407b512d08cc6f7", "text": "Four studies apply self-determination theory (SDT; Ryan & Deci, 2000) in investigating motivation for computer game play, and the effects of game play on wellbeing. Studies 1–3 examine individuals playing 1, 2 and 4 games, respectively, and show that perceived in-game autonomy and competence are associated with game enjoyment, preferences, and changes in well-being pre- to post-play. Competence and autonomy perceptions are also related to the intuitive nature of game controls, and the sense of presence or immersion in participants’ game play experiences. Study 4 surveys an on-line community with experience in multiplayer games. Results show that SDT’s theorized needs for autonomy, competence, and relatedness independently predict enjoyment and future game play. The SDT model is also compared with Yee’s (2005) motivation taxonomy of game play motivations. Results are discussed in terms of the relatively unexplored landscape of human motivation within virtual worlds.", "title": "" }, { "docid": "a0c15895a455c07b477d4486d32582ef", "text": "PURPOSE\nTo evaluate the efficacy of α-lipoic acid (ALA) in reducing scarring after trabeculectomy.\n\n\nMATERIALS AND METHODS\nEighteen adult New Zealand white rabbits underwent trabeculectomy. During trabeculectomy, thin sponges were placed between the sclera and Tenon's capsule for 3 minutes, saline solution, mitomycin-C (MMC) and ALA were applied to the control group (CG) (n=6 eyes), MMC group (MMCG) (n=6 eyes), and ALA group (ALAG) (n=6 eyes), respectively. After surgery, topical saline and ALA were applied for 28 days to the control and ALAGs, respectively. Filtrating bleb patency was evaluated by using 0.1% trypan blue. Hematoxylin and eosin and Masson trichrome staining for toxicity, total cellularity, and collagen organization; α-smooth muscle actin immunohistochemistry staining was performed for myofibroblast phenotype identification.\n\n\nRESULTS\nClinical evaluation showed that all 6 blebs (100%) of the CG had failed, whereas there were only 2 failures (33%) in the ALAG and no failures in the MMCG on day 28. Histologic evaluation showed significantly lower inflammatory cell infiltration in the ALAGs and CGs than the MMCG. Toxicity change was more significant in the MMCG than the control and ALAGs. Collagen was better organized in the ALAG than control and MMCGs. In immunohistochemistry evaluation, ALA significantly reduced the population of cells expressing α-smooth muscle actin.\n\n\nCONCLUSIONS\nALA prevents and/or reduces fibrosis by inhibition of inflammation pathways, revascularization, and accumulation of extracellular matrix. It can be used as an agent for delaying tissue regeneration and for providing a more functional-permanent fistula.", "title": "" }, { "docid": "533c441acd7a57c11bd1b12d847f6460", "text": "Recent Pwn2Own competitions have demonstrated the continued effectiveness of control hijacking attacks despite deployed countermeasures including stack canaries and ASLR. A powerful defense called Control Flow Integrity (CFI) offers a principled approach to preventing such attacks. However, prior CFI implementations use static analysis and must limit protection to remain practical. 
These limitations have enabled attacks against all known CFI systems, as demonstrated in recent work. This paper presents a cryptographic approach to control flow integrity (CCFI) that is both fine-grain and practical: using message authentication codes (MAC) to protect control flow elements such as return addresses, function pointers, and vtable pointers. MACs on these elements prevent even powerful attackers with random read/ write access to memory from tampering with program control flow. We implemented CCFI in Clang/LLVM, taking advantage of recently available cryptographic CPU instructions. We evaluate our system on several large software packages (including nginx, Apache and memcache) as well as all their dependencies. The cost of protection ranges from a 3–18% decrease in request rate.", "title": "" }, { "docid": "fa7645dd9623879d7442c944ca3fac3c", "text": "Human communication involves conveying messages both through verbal and non-verbal channels (facial expression, gestures, prosody, etc.). Nonetheless, the task of learning these patterns for a computer by combining cues from multiple modalities is challenging because it requires effective representation of the signals and also taking into consideration the complex interactions between them. From the machine learning perspective this presents a two-fold challenge: a) Modeling the intermodal variations and dependencies; b) Representing the data using an apt number of features, such that the necessary patterns are captured but at the same time allaying concerns such as over-fitting. In this work we attempt to address these aspects of multimodal recognition, in the context of recognizing two essential speaker traits, namely passion and credibility of online movie reviewers. We propose a novel ensemble classification approach that combines two different perspectives on classifying multimodal data. Each of these perspectives attempts to independently address the two-fold challenge. In the first, we combine the features from multiple modalities but assume inter-modality conditional independence. In the other one, we explicitly capture the correlation between the modalities but in a space of few dimensions and explore a novel clustering based kernel similarity approach for recognition. Additionally, this work investigates a recent technique for encoding text data that captures semantic similarity of verbal content and preserves word-ordering. The experimental results on a recent public dataset shows significant improvement of our approach over multiple baselines. Finally, we also analyze the most discriminative elements of a speaker's non-verbal behavior that contribute to his/her perceived credibility/passionateness.", "title": "" }, { "docid": "b16812b650017cf75797cb88949f4481", "text": "Wastage of electricity is one of the main problems which we are facing now a day. In our home, school, colleges or industry we see that lights are kept on even if there is nobody in the room or area. This happens due to negligence or because we forgot to turn the lights off or when we are in hurry. In this paper, an Energy Preserving System for Smart Rooms (EPSSR) is proposed to save energy in smart rooms. Using the ESP8266 chip which is a Wi-Fi chip with full TCP/IP stack and MCU capability we developed a lighting controls to reduce electrical usage. Based on the technology of the Internet of Things (IoT), a lot of solutions may be done to control smart rooms light without the need of accessing the electrical sockets or plug. 
Our research idea focuses on measuring the number of persons entering any room like seminar hall, conference room and classroom using pair of Infrared sensors and the chip. When a person enters the room, counter will be incremented with lightening the room and the light continue lightening while persons counter greater than zero. When a person leaves the room, the counter is decreased by one. If the persons counter reaches zero, the lights inside the room will be turned off using a relay interface. This paper provides a real energy preserving model that could be used in daily life.", "title": "" }, { "docid": "348a5c33bde53e7f9a1593404c6589b4", "text": "Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.", "title": "" }, { "docid": "ce7d204755e9faa7aa6273f3295bfbef", "text": "This paper describes our solution for the video recognition task of ActivityNet Kinetics challenge that ranked the 1st place. Most of existing state-of-the-art video recognition approaches are in favor of an end-to-end pipeline. One exception is the framework of DevNet [3]. The merit of DevNet is that they first use the video data to learn a network (i.e. fine-tuning or training from scratch). Instead of directly using the end-to-end classification scores (e.g. softmax scores), they extract the features from the learned network and then fed them into the off-the-shelf machine learning models to conduct video classification. However, the effectiveness of this line work has long-term been ignored and underestimated. In this submission, we extensively use this strategy. Particularly, we investigate four temporal modeling approaches using the learned features: Multi-group Shifting Attention Network, Temporal Xception Network, Multi-stream sequence Model and Fast-Forward Sequence Model. Experiment results on the challenging Kinetics dataset demonstrate that our proposed temporal modeling approaches can significantly improve existing approaches in the large-scale video recognition tasks. Most remarkably, our best single Multi-group Shifting Attention Network can achieve 77.7% in term of top-1 accuracy and 93.2% in term of top-5 accuracy on the validation set.", "title": "" }, { "docid": "15cb8a43e4b6b2f30218fe994d1db51e", "text": "In this paper, we present a home-monitoring oriented human activity recognition benchmark database, based on the combination of a color video camera and a depth sensor. 
Our contributions are two-fold: 1) We have created a publicly releasable human activity video database (i.e., named as RGBD-HuDaAct), which contains synchronized color-depth video streams, for the task of human daily activity recognition. This database aims at encouraging more research efforts on human activity recognition based on multi-modality sensor combination (e.g., color plus depth). 2) Two multi-modality fusion schemes, which naturally combine color and depth information, have been developed from two state-of-the-art feature representation methods for action recognition, i.e., spatio-temporal interest points (STIPs) and motion history images (MHIs). These depth-extended feature representation methods are evaluated comprehensively and superior recognition performances over their uni-modality (e.g., color only) counterparts are demonstrated.", "title": "" }, { "docid": "4493a071f0dbdf7464d7ad299fec97d3", "text": "Drawing upon self-determination theory, this study tested different types of behavioral regulation as parallel mediators of the association between the job’s motivating potential, autonomy-supportive leadership, and understanding the organization’s strategy, on the one hand, and job satisfaction, turnover intention, and two types of organizational citizenship behaviors (OCB), on the other hand. In particular, intrinsic motivation and identified regulation were contrasted as idiosyncratic motivational processes. Analyses were based on data from 201 employees in the Swiss insurance industry. Results supported both types of self-determined motivation as mediators of specific antecedent-outcome relationships. Identified regulation, for example, particularly mediated the impact of contextual antecedents on both civic virtue and altruism OCB. Overall, controlled types of behavioral regulation showed comparatively weak relations to antecedents or consequences. The unique characteristics of motivational processes and potential explanations for the weak associations of controlled motivation are discussed.", "title": "" }, { "docid": "ca90c81d258175d8574417940f8f04a7", "text": "Use of long sentence-like or phrase-like passwords such as \"abiggerbetterpassword\" and \"thecommunistfairy\" is increasing. In this paper, we study the role of grammatical structures underlying such passwords in diminishing the security of passwords. We show that the results of the study have direct bearing on the design of secure password policies, and on password crackers used for enforcing password security. Using an analytical model based on Parts-of-Speech tagging we show that the decrease in search space due to the presence of grammatical structures can be more than 50%. A significant result of our work is that the strength of long passwords does not increase uniformly with length. We show that using a better dictionary e.g. Google Web Corpus, we can crack more long passwords than previously shown (20.5% vs. 6%). We develop a proof-of-concept grammar-aware cracking algorithm to improve the cracking efficiency of long passwords. In a performance evaluation on a long password dataset, 10% of the total dataset was exclusively cracked by our algorithm and not by state-of-the-art password crackers.", "title": "" }, { "docid": "4e57b4a9bc4e2f4b869c2c111d223aea", "text": "Many reinforcement learning algorithms use trajectories collected from the execution of one or more policies to propose a new policy. 
Because execution of a bad policy can be costly or dangerous, techniques for evaluating the performance of the new policy without requiring its execution have been of recent interest in industry. Such off-policy evaluation methods, which estimate the performance of a policy using trajectories collected from the execution of other policies, heretofore have not provided confidences regarding the accuracy of their estimates. In this paper we propose an off-policy method for computing a lower confidence bound on the expected return of a policy.", "title": "" }, { "docid": "41e01174ce8222a950c789f1e1eafec8", "text": "In the online job recruitment domain, accurate classification of jobs and resumes to occupation categories is important for matching job seekers with relevant jobs. An example of such a job title classification system is an automatic text document classification system that utilizes machine learning. Machine learning-based document classification techniques for images, text and related entities have been well researched in academia and have also been successfully applied in many industrial settings. In this paper we present Carotene, a machine learning-based semi-supervised job title classification system that is currently in production at CareerBuilder. Carotene leverages a varied collection of classification and clustering tools and techniques to tackle the challenges of designing a scalable classification system for a large taxonomy of job categories. It encompasses these techniques in a cascade classifier architecture. We first present the architecture of Carotene, which consists of a two-stage coarse and fine level classifier cascade. We compare Carotene to an early version that was based on a flat classifier architecture and also compare and contrast Carotene with a third party occupation classification system. The paper concludes by presenting experimental results on real world industrial data using both machine learning metrics and actual user experience surveys.", "title": "" } ]
scidocsrr
38e31edadfe021a3762295e564d16303
3D Mapping with an RGB-D Camera
[ { "docid": "59494d2a19ea2167f4095807ded28d67", "text": "This paper describes extensions to the Kintinuous [1] algorithm for spatially extended KinectFusion, incorporating the following additions: (i) the integration of multiple 6DOF camera odometry estimation methods for robust tracking; (ii) a novel GPU-based implementation of an existing dense RGB-D visual odometry algorithm; (iii) advanced fused realtime surface coloring. These extensions are validated with extensive experimental results, both quantitative and qualitative, demonstrating the ability to build dense fully colored models of spatially extended environments for robotics and virtual reality applications while remaining robust against scenes with challenging sets of geometric and visual features.", "title": "" } ]
[ { "docid": "cb0efa2b1e41898bc644fa8c2bc07fc7", "text": "As compatible meshes play important roles in many computer-aided design applications, we present a new approach for modelling compatible meshes. Our compatible mesh modelling method is derived from the skin algorithm [Markosian et al. 1999] which conducts an active particle-based mesh surface to approximate the given models serving as skeletons. To construct compatible meshes, we developed a duplicate-skins algorithm to simultaneously grow two skins with identical connectivity over two skeleton models; therefore, the resultant skin meshes are compatible. Our duplicate-skins algorithm has less topological constraints on the input models: multiple polygonal models, models with ill-topology meshes, or even point clouds could all be employed as skeletons to model compatible meshes. Based on the results of our duplicate-skins algorithm, the modelling method of n-Ary compatible meshes is also developed in this paper.", "title": "" }, { "docid": "5c8ed4f3831ce864cbdaea07171b5a57", "text": "Hyper-beta-alaninemia is a rare metabolic condition that results in elevated plasma and urinary β-alanine levels and is characterized by neurotoxicity, hypotonia, and respiratory distress. It has been proposed that at least some of the symptoms are caused by oxidative stress; however, only limited information is available on the mechanism of reactive oxygen species generation. The present study examines the hypothesis that β-alanine reduces cellular levels of taurine, which are required for normal respiratory chain function; cellular taurine depletion is known to reduce respiratory function and elevate mitochondrial superoxide generation. To test the taurine hypothesis, isolated neonatal rat cardiomyocytes and mouse embryonic fibroblasts were incubated with medium lacking or containing β-alanine. β-alanine treatment led to mitochondrial superoxide accumulation in conjunction with a decrease in oxygen consumption. The defect in β-alanine-mediated respiratory function was detected in permeabilized cells exposed to glutamate/malate but not in cells utilizing succinate, suggesting that β-alanine leads to impaired complex I activity. Taurine treatment limited mitochondrial superoxide generation, supporting a role for taurine in maintaining complex I activity. Also affected by taurine is mitochondrial morphology, as β-alanine-treated fibroblasts undergo fragmentation, a sign of unhealthy mitochondria that is reversed by taurine treatment. If left unaltered, β-alanine-treated fibroblasts also undergo mitochondrial apoptosis, as evidenced by activation of caspases 3 and 9 and the initiation of the mitochondrial permeability transition. Together, these data show that β-alanine mediates changes that reduce ATP generation and enhance oxidative stress, factors that contribute to heart failure.", "title": "" }, { "docid": "1f45d589a42815614d48d20b4ca4abb6", "text": "The modification of the conventional helical antenna by two pitch angles and a truncated cone reflector was analyzed. Limits of the axial radiation mode were examined by criteria defined with axial ratio, HPBW and SLL of the antenna. Gain increase was achieved but the bandwidth of the axial radiation mode remained almost the same. The practical adjustment was made on helical antenna with dielectric cylinder and measured in a laboratory. 
The measurement results confirmed the improvement of the conventional antenna in terms of gain increase.", "title": "" }, { "docid": "b79fb02d0b89d288b1733c3194e304ec", "text": "In this paper, the idea of a Prepaid energy meter using an AT89S52 microcontroller has been introduced. This concept provides a cost efficient manner of electricity billing. The present energy billing systems are discrete, inaccurate, costly and slow. They are also time and labour consuming. The major drawback of traditional billing system is power and energy theft. This drawback is reduced by using a prepaid energy meter which is based on the concept “Pay first and then use it”. Prepaid energy meter also reduces the error made by humans while taking readings to a large extent and there is no need to take reading in it. The prepaid energy meter uses a recharge card which is available in various ranges (i.e. Rs. 50, Rs. 100, Rs. 200, etc.). The recharge is done by using a keypad and the meter is charged with the amount. According to the power consumption, the amount will be reduced. An LDR (light Dependant Resistor) circuit counts the amount of energy consumed and displays the remaining amount of energy on the LCD. A relay system has been used which shut down or disconnect the energy meter and load through supply mains when the recharge amount is depleted. A buzzer is used as an alarm which starts before the recharge amount reaches a minimum value.", "title": "" }, { "docid": "79729b8f7532617015cbbdc15a876a5c", "text": "We introduce recurrent neural networkbased Minimum Translation Unit (MTU) models which make predictions based on an unbounded history of previous bilingual contexts. Traditional back-off n-gram models suffer under the sparse nature of MTUs which makes estimation of highorder sequence models challenging. We tackle the sparsity problem by modeling MTUs both as bags-of-words and as a sequence of individual source and target words. Our best results improve the output of a phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.5 BLEU, and we outperform the traditional n-gram based MTU approach by up to 0.8 BLEU.", "title": "" }, { "docid": "4d987e2c0f3f49609f70149460201889", "text": "Estimating count and density maps from crowd images has a wide range of applications such as video surveillance, traffic monitoring, public safety and urban planning. In addition, techniques developed for crowd counting can be applied to related tasks in other fields of study such as cell microscopy, vehicle counting and environmental survey. The task of crowd counting and density map estimation is riddled with many challenges such as occlusions, non-uniform density, intra-scene and inter-scene variations in scale and perspective. Nevertheless, over the last few years, crowd count analysis has evolved from earlier methods that are often limited to small variations in crowd density and scales to the current state-of-the-art methods that have developed the ability to perform successfully on a wide range of scenarios. The success of crowd counting methods in the recent years can be largely attributed to deep learning and publications of challenging datasets. In this paper, we provide a comprehensive survey of recent Convolutional Neural Network (CNN) based approaches that have demonstrated significant improvements over earlier methods that rely largely on hand-crafted representations. 
First, we briefly review the pioneering methods that use hand-crafted representations and then we delve in detail into the deep learning-based approaches and recently published datasets. Furthermore, we discuss the merits and drawbacks of existing CNN-based approaches and identify promising avenues of research in this rapidly evolving field. c © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "927d4288f98241e9d394f3d9c4a861c0", "text": "Phytophthora capsici is a devastating disease of pepper (Capsicum sp.) in Taiwan causing complete loss of commercial fields. The objective of this study was to characterize genetic diversity for 38 newly collected isolates and three historical isolates. Analysis of data includes whole genome sequence for two new isolates and for two isolates collected previously in 1987 and 1995. In addition, 63 single nucleotide polymorphism loci were genotyped using targeted-sequencing, revealing 27 unique genotypes. Genotypes fell into three genetic groups: two of the groups contain 90% (n = 33) of the 2016 isolates, are triploid (or higher), are exclusively the A2 mating type and appear to be two distinct clonal lineages. The isolates from 2016 that grouped with the historical isolates are diploid and the A1 mating type. Whole genome sequence revealed that ploidy varies by linkage group, and it appears the A2 clonal lineages may have switched mating type due to increased ploidy. Most of the isolates were recently race-typed on a set of differential C. annuum, and although there was no direct correlation between virulence and ploidy, many of the triploid isolates were less virulent as compared to the historical diploid isolates. The implications for breeding resistant pepper and conducting population analyses are discussed.", "title": "" }, { "docid": "7d391483dfe60f4ad60735264a0b7ab2", "text": "The growing interest and the market for indoor Location Based Service (LBS) have been drivers for a huge demand for building data and reconstructing and updating of indoor maps in recent years. The traditional static surveying and mapping methods can't meet the requirements for accuracy, efficiency and productivity in a complicated indoor environment. Utilizing a Simultaneous Localization and Mapping (SLAM)-based mapping system with ranging and/or camera sensors providing point cloud data for the maps is an auspicious alternative to solve such challenges. There are various kinds of implementations with different sensors, for instance LiDAR, depth cameras, event cameras, etc. Due to the different budgets, the hardware investments and the accuracy requirements of indoor maps are diverse. However, limited studies on evaluation of these mapping systems are available to offer a guideline of appropriate hardware selection. In this paper we try to characterize them and provide some extensive references for SLAM or mapping system selection for different applications. Two different indoor scenes (a L shaped corridor and an open style library) were selected to review and compare three different mapping systems, namely: (1) a commercial Matterport system equipped with depth cameras; (2) SLAMMER: a high accuracy small footprint LiDAR with a fusion of hector-slam and graph-slam approaches; and (3) NAVIS: a low-cost large footprint LiDAR with Improved Maximum Likelihood Estimation (IMLE) algorithm developed by the Finnish Geospatial Research Institute (FGI). 
Firstly, an L shaped corridor (2nd floor of FGI) with approximately 80 m length was selected as the testing field for Matterport testing. Due to the lack of quantitative evaluation of Matterport indoor mapping performance, we attempted to characterize the pros and cons of the system by carrying out six field tests with different settings. The results showed that the mapping trajectory would influence the final mapping results and therefore, there was optimal Matterport configuration for better indoor mapping results. Secondly, a medium-size indoor environment (the FGI open library) was selected for evaluation of the mapping accuracy of these three indoor mapping technologies: SLAMMER, NAVIS and Matterport. Indoor referenced maps were collected with a small footprint Terrestrial Laser Scanner (TLS) and using spherical registration targets. The 2D indoor maps generated by these three mapping technologies were assessed by comparing them with the reference 2D map for accuracy evaluation; two feature selection methods were also utilized for the evaluation: interactive selection and minimum bounding rectangles (MBRs) selection. The mapping RMS errors of SLAMMER, NAVIS and Matterport were 2.0 cm, 3.9 cm and 4.4 cm, respectively, for the interactively selected features, and the corresponding values using MBR features were 1.7 cm, 3.2 cm and 4.7 cm. The corresponding detection rates for the feature points were 100%, 98.9%, 92.3% for the interactive selected features and 100%, 97.3% and 94.7% for the automated processing. The results indicated that the accuracy of all the evaluated systems could generate indoor map at centimeter-level, but also variation of the density and quality of collected point clouds determined the applicability of a system into a specific LBS.", "title": "" }, { "docid": "6b125ab0691988a5836855346f277970", "text": "Cardol (C₁₅:₃), isolated from cashew (Anacardium occidentale L.) nut shell liquid, has been shown to exhibit bactericidal activity against various strains of Staphylococcus aureus, including methicillin-resistant strains. The maximum level of reactive oxygen species generation was detected at around the minimum bactericidal concentration of cardol, while reactive oxygen species production drastically decreased at doses above the minimum bactericidal concentration. The primary response for bactericidal activity around the bactericidal concentration was noted to primarily originate from oxidative stress such as intracellular reactive oxygen species generation. High doses of cardol (C₁₅:₃) were shown to induce leakage of K⁺ from S. aureus cells, which may be related to the decrease in reactive oxygen species. Antioxidants such as α-tocopherol and ascorbic acid restricted reactive oxygen species generation and restored cellular damage induced by the lipid. Cardol (C₁₅:₃) overdose probably disrupts the native membrane-associated function as it acts as a surfactant. The maximum antibacterial activity of cardols against S. aureus depends on their log P values (partition coefficient in octanol/water) and is related to their similarity to those of anacardic acids isolated from the same source.", "title": "" }, { "docid": "e08bc715d679ba0442883b4b0e481998", "text": "Rheology, as a branch of physics, studies the deformation and flow of matter in response to an applied stress or strain. According to the materials’ behaviour, they can be classified as Newtonian or non-Newtonian (Steffe, 1996; Schramm, 2004). 
The most of the foodstuffs exhibit properties of non-Newtonian viscoelastic systems (Abang Zaidel et al., 2010). Among them, the dough can be considered as the most unique system from the point of material science. It is viscoelastic system which exhibits shear-thinning and thixotropic behaviour (Weipert, 1990). This behaviour is the consequence of dough complex structure in which starch granules (75-80%) are surrounded by three-dimensional protein (20-25%) network (Bloksma, 1990, as cited in Weipert, 2006). Wheat proteins are consisted of gluten proteins (80-85% of total wheat protein) which comprise of prolamins (in wheat gliadins) and glutelins (in wheat glutenins) and non gluten proteins (15-20% of the total wheat proteins) such as albumins and globulins (Veraverbeke & Delcour, 2002). Gluten complex is a viscoelastic protein responsible for dough structure formation. Among the cereal technologists, rheology is widely recognized as a valuable tool in quality assessment of flour. Hence, in the cereal scientific community, rheological measurements are generally employed throughout the whole processing chain in order to monitor the mechanical properties, molecular structure and composition of the material, to imitate materials’ behaviour during processing and to anticipate the quality of the final product (Dobraszczyk & Morgenstern, 2003). Rheology is particularly important technique in revealing the influence of flour constituents and additives on dough behaviour during breadmaking. There are many test methods available to measure rheological properties, which are commonly divided into empirical (descriptive, imitative) and fundamental (basic) (Scott Blair, 1958 as cited in Weipert, 1990). Although being criticized due to their shortcomings concerning inflexibility in defining the level of deforming force, usage of strong deformation forces, interpretation of results in relative non-SI units, large sample requirements and its impossibility to define rheological parameters such as stress, strain, modulus or viscosity (Weipert, 1990; Dobraszczyk & Morgenstern, 2003), empirical rheological measurements are still indispensable in the cereal quality laboratories. According to the empirical rheological parameters it is possible to determine the optimal flour quality for a particular purpose. The empirical techniques used for dough quality", "title": "" }, { "docid": "d8a84b17e1ba7c96acf13e31130ecd02", "text": "On April 15, 2007 the scientific world has commemorated Leonhard Euler's 300 th birthday. Euler's eminent work has become famous in many fields: Mathematics, mechanics, optics, acoustics, astronomy and geodesy, even in the theory of music. This article will recall his no less distinguished contributions to the founding of the modern theory of ships. These are not so widely known to the general professional public. In laying these foundations in ship theory like in other fields Euler was seeking \" first principles, generality, order and above all clarity \". This article will highlight those achievements for which we owe him our gratitude. There is no doubt that Leonhard Euler was one of the founders of the modern theory of ships. He raised many fundamental questions for the first time and through all phases of his professional lifetime devoted himself to subjects of ship theory. Thereby he gave a unique profile to this still nascent scientific discipline. Many of his approaches have been of lasting, incisive influence on the structure of this field. 
Some of his ideas have become so much a matter of routine today that we have forgotten their descent from Euler. This article will synoptically review Euler's contributions to the foundation of this discipline, will correlate them with the stages of Euler's own scientific development, embedded in the rich environment of scientific enlightenment in the 18th c., and will appreciate the value of his lasting aftereffects until today. The same example will serve to recognize the fertile field of tension always existing between Euler's fundamental orientation and his desire to make contributions to practical applications, which has remained characteristic of ship theory to the present day. Without claiming completeness in detail this article aims at giving a coherent overview of Euler's approaches and objectives in this discipline. This synopsis will be presented primarily from the viewpoint of engineering science in its current stage of development.", "title": "" }, { "docid": "2a7433cf92c8f845c951114eca8ce192", "text": "A through-dielectric switched-antenna-array radar imaging system is shown that produces near real-time imagery of targets on the opposite side of a lossy dielectric slab. This system operates at S-band, provides a frame rate of 0.5 Hz, and operates at a stand-off range of 6 m or greater. The antenna array synthesizes 44 effective phase centers in a linear array providing $\\lambda/2$ element-to-element spacing by time division multiplexing the radar's transmit and receive ports between 8 receive elements and 13 transmit elements, producing 2D (range vs. cross-range) imagery of what is behind a slab. Laboratory measurements agree with simulations, the air-slab interface is range gated out of the image, and target scenes consisting of cylinders and soda cans are imaged through the slab. A 2D model of a slab, a cylinder, and phase centers shows that blurring due to the slab and bistatic phase centers on the array is negligible when the radar sensor is located at stand-off ranges of 6 m or greater.", "title": "" }, { "docid": "f4b271c7ee8bfd9f8aa4d4cf84c4efd4", "text": "Today, and possibly for a long time to come, the full driving task is too complex an activity to be fully formalized as a sensing-acting robotics system that can be explicitly solved through model-based and learning-based approaches in order to achieve full unconstrained vehicle autonomy. Localization, mapping, scene perception, vehicle control, trajectory optimization, and higher-level planning decisions associated with autonomous vehicle development remain full of open challenges. This is especially true for unconstrained, real-world operation where the margin of allowable error is extremely small and the number of edge-cases is extremely large. Until these problems are solved, human beings will remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving. 
The governing objectives of the MIT Autonomous Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale real-world driving data collection that includes high-definition video to fuel the development of deep learning based internal and external perception systems, (2) gain a holistic understanding of how human beings interact with vehicle automation technology by integrating video data with vehicle state data, driver characteristics, mental models, and self-reported experiences with technology, and (3) identify how technology and other factors related to automation adoption and use can be improved in ways that save lives. In pursuing these objectives, we have instrumented 21 Tesla Model S and Model X vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque, and 2 Cadillac CT6 vehicles for both long-term (over a year per driver) and medium term (one month per driver) naturalistic driving data collection. Furthermore, we are continually developing new methods for analysis of the massive-scale dataset collected from the instrumented vehicle fleet. The recorded data streams include IMU, GPS, CAN messages, and high-definition video streams of the driver face, the driver cabin, the forward roadway, and the instrument cluster (on select vehicles). The study is on-going and growing. To date, we have 99 participants, 11,846 days of participation, 405,807 miles, and 5.5 billion video frames. This paper presents the design of the study, the data collection hardware, the processing of the data, and the computer vision algorithms currently being used to extract actionable knowledge from the data. MIT Autonomous Vehicle", "title": "" }, { "docid": "fa015edda10d1eaaad3517e8abb5729c", "text": "The most common methods for localization of radio frequency transmitters are based on two processing steps. In the first step, parameters such as angle of arrival or time of arrival are estimated at each base station independently. In the second step, the estimated parameters are used to determine the location of the transmitters. The direct position determination approach advocates using the observations from all the base stations together in order to estimate the locations in a single step. This single-step method is known to outperform two-step methods when the signal-to-noise ratio is low. In this paper, we propose a direct-position-determination-based method for localization of multiple emitters that transmit unknown signals. The method does not require knowledge of the number of emitters. It is based on minimum-variance-distortionless-response considerations to achieve a high resolution estimator that requires only a two-dimensional search for planar geometry, and a three-dimensional search for the general case.", "title": "" }, { "docid": "b9671707763d883e0c1855a2648713fd", "text": "Durch die immer starker wachsenden Datenberge stößt der klassische Data Warehouse-Ansatz an seine Grenzen, weil er in Punkto Schnelligkeit, Datenvolumen und Auswertungsmöglichkeiten nicht mehr mithalten kann. Neue Big Data-Technologien wie analytische Datenbanken, NoSQL-Datenbanken oder Hadoop versprechen Abhilfe, haben aber einige Nachteile: Während sich analytische Datenbanken nur unzureichend mit anderen Datenquellen integrieren lassen, reichen die Abfragesprachen von NoSQL-Datenbanken nicht an die Möglichkeiten von SQL heran. Die Einführung von Hadoop erfordert wiederum den aufwändigen Aufbau von Knowhow im Unternehmen. 
Durch eine geschickte Kombination des Data Warehouse-Konzepts mit modernen Big Data-Technologien lassen sich diese Schwierigkeiten überwinden: Die Data Marts, auf die analytische Datenbanken zugreifen, können aus dem Data Warehouse gespeist werden. Die Vorteile von NoSQL lassen sich in den Applikationsdatenbanken nutzen, während die Daten für die Analysen in das Data Warehouse geladen werden, wo die relationalen Datenbanken ihre Stärken ausspielen. Die Ergebnisse von Hadoop-Transaktionen schließlich lassen sich sehr gut in einem Data Warehouse oder in Data Marts ablegen, wo sie einfach über eine Data-Warehouse-Plattform ausgewertet werden können, während die Rohdaten weiterhin bei Hadoop verbleiben. Zudem unterstützt Hadoop auch Werkzeuge fur einen performanten SQL-Zugriff. Der Artikel beschreibt, wie aus altem Data Warehouse-Konzept und modernen Technologien die „neue Realität“ entsteht und illustriert dies an verschiedenen Einsatzszenarien.", "title": "" }, { "docid": "f6fa1c4ce34f627d9d7d1ca702272e26", "text": "One of the most difficult aspects in rhinoplasty is resolving and preventing functional compromise of the nasal valve area reliably. The nasal valves are crucial for the individual breathing competence of the nose. Structural and functional elements contribute to this complex system: the nasolabial angle, the configuration and stability of the alae, the function of the internal nasal valve, the anterior septum symmetrically separating the bilateral airways and giving structural and functional support to the alar cartilage complex and to their junction with the upper lateral cartilages, the scroll area. Subsequently, the open angle between septum and sidewalls is important for sufficient airflow as well as the position and function of the head of the turbinates. The clinical examination of these elements is described. Surgical techniques are more or less well known and demonstrated with patient examples and drawings: anterior septoplasty, reconstruction of tip and dorsum support by septal extension grafts and septal replacement, tip suspension and lateral crural sliding technique, spreader grafts and suture techniques, splay grafts, alar batten grafts, lateral crural extension grafts, and lateral alar suspension. The numerous literature is reviewed.", "title": "" }, { "docid": "9206f96ca91ea4855ceab8d59d3e68ad", "text": "INTRODUCTION\n5α-Reductase inhibitors (5ARIs) are widely used for the treatment of benign prostatic hyperplasia (BPH) and androgenetic alopecia (AGA).\n\n\nAIM\nTo review all the available data on the effect of 5ARIs on sexual function and assess whether 5ARIs increase the risk of sexual dysfunction.\n\n\nMETHODS\nA systematic search of the literature was conducted using the Medline, Embase, and Cochrane databases. The search was limited to articles published in English and up to October 2015. Article selection proceeded according to the search strategy based on Preferred Reporting Items for Systematic Reviews and Meta-analyses criteria. Data were analyzed using Stata 12.0. A fixed- or a random-effects model was used to calculate the overall combined risk estimates. Publication bias was assessed using Begg and Egger tests.\n\n\nMAIN OUTCOME MEASURES\nSexual dysfunction, erectile dysfunction, and decreased libido.\n\n\nRESULTS\nAfter screening 493 articles, 17 randomized controlled trials with 17,494 patients were included. Nine studies evaluated the efficacy of 5ARIs in men with BPH. The other eight reported using 5ARIs in the treatment of men with AGA. 
The mean age of participants was 60.10 years across all studies. We included 10 trials (6,779 patients) on the efficacy and safety of finasteride, 4 trials (6,222 patients) on the safety and tolerability of dutasteride, and 3 trials (4,493 patients) using finasteride and dutasteride for AGA. The pooled relative risks for sexual dysfunction were 2.56 (95% CI = 1.48-4.42) in men with BPH and 1.21 (95% CI = 0.85-1.72) in men with AGA; those for erectile dysfunction were 1.55 (95% CI = 1.14-2.12) in men with BPH and 0.66 (95% CI = 0.20-2.25) in men with AGA; and those for decreased libido were 1.69 (95% CI = 1.03-2.79) in men with BPH and 1.16 (95% CI = 0.50-2.72) in men with AGA. Estimates of the total effects were generally consistent with the sensitivity analysis. No evidence of publication bias was observed.\n\n\nCONCLUSION\nEvidence from the randomized controlled trials suggested that 5ARIs were associated with increased adverse effects on sexual function in men with BPH compared with placebo. However, the association was not statistically significant in men with AGA. Well-designed randomized controlled trials are indicated to study further the mechanism and effects of 5ARIs on sexual function.", "title": "" }, { "docid": "018d855cdd9a5e95beba0ae39dddf4ce", "text": "Citation Agrawal, Ajay K., Catalini, Christian, and Goldfarb, Avi. \"Some Simple Economics of Crowdfunding.\" Innovation Policy and the Economy 2013, ed. Josh Lerner and Scott Stern, Univeristy of Chicago Press, 2014, 1-47. © 2014 National Bureau of Economic Research Innovation Policy and the Economy As Published http://press.uchicago.edu/ucp/books/book/distributed/I/bo185081 09.html Publisher University of Chicago Press", "title": "" }, { "docid": "e483d914e00fa46a6be188fabd396165", "text": "Assessing distance betweeen the true and the sample distribution is a key component of many state of the art generative models, such as Wasserstein Autoencoder (WAE). Inspired by prior work on Sliced-Wasserstein Autoencoders (SWAE) and kernel smoothing we construct a new generative model – Cramer-Wold AutoEncoder (CWAE). CWAE cost function, based on introduced Cramer-Wold distance between samples, has a simple closed-form in the case of normal prior. As a consequence, while simplifying the optimization procedure (no need of sampling necessary to evaluate the distance function in the training loop), CWAE performance matches quantitatively and qualitatively that of WAE-MMD (WAE using maximum mean discrepancy based distance function) and often improves upon SWAE.", "title": "" }, { "docid": "7775c00550a6042c38f38bac257ec334", "text": "Real-world face recognition datasets exhibit long-tail characteristics, which results in biased classifiers in conventionally-trained deep neural networks, or insufficient data when long-tail classes are ignored. In this paper, we propose to handle long-tail classes in the training of a face recognition engine by augmenting their feature space under a center-based feature transfer framework. A Gaussian prior is assumed across all the head (regular) classes and the variance from regular classes are transferred to the long-tail class representation. This encourages the long-tail distribution to be closer to the regular distribution, while enriching and balancing the limited training data. Further, an alternating training regimen is proposed to simultaneously achieve less biased decision boundaries and a more discriminative feature representation. 
We conduct empirical studies that mimic long-tail datasets by limiting the number of samples and the proportion of long-tail classes on the MS-Celeb-1M dataset. We compare our method with baselines not designed to handle long-tail classes and also with state-of-the-art methods on face recognition benchmarks. State-of-the-art results on LFW, IJB-A and MS-Celeb-1M datasets demonstrate the effectiveness of our feature transfer approach and training strategy. Finally, our feature transfer allows smooth visual interpolation, which demonstrates disentanglement to preserve identity of a class while augmenting its feature space with non-identity variations.", "title": "" } ]
scidocsrr
4cd68c858843f24adc2adfdad8a37f23
Prime Object Proposals with Randomized Prim's Algorithm
[ { "docid": "28fd803428e8f40a4627e05a9464e97b", "text": "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.", "title": "" }, { "docid": "f9c8209fcecbbed99aa29761dffc8e25", "text": "ImageNet is a large-scale database of object classes with millions of images. Unfortunately only a small fraction of them is manually annotated with bounding-boxes. This prevents useful developments, such as learning reliable object detectors for thousands of classes. In this paper we propose to automatically populate ImageNet with many more bounding-boxes, by leveraging existing manual annotations. The key idea is to localize objects of a target class for which annotations are not available, by transferring knowledge from related source classes with available annotations. We distinguish two kinds of source classes: ancestors and siblings. Each source provides knowledge about the plausible location, appearance and context of the target objects, which induces a probability distribution over windows in images of the target class. We learn to combine these distributions so as to maximize the location accuracy of the most probable window. Finally, we employ the combined distribution in a procedure to jointly localize objects in all images of the target class. Through experiments on 0.5 million images from 219 classes we show that our technique (i) annotates a wide range of classes with bounding-boxes; (ii) effectively exploits the hierarchical structure of ImageNet, since all sources and types of knowledge we propose contribute to the results; (iii) scales efficiently.", "title": "" } ]
[ { "docid": "47afccb5e7bcdade764666f3b5ab042e", "text": "Social media comprises interactive applications and platforms for creating, sharing and exchange of user-generated contents. The past ten years have brought huge growth in social media, especially online social networking services, and it is changing our ways to organize and communicate. It aggregates opinions and feelings of diverse groups of people at low cost. Mining the attributes and contents of social media gives us an opportunity to discover social structure characteristics, analyze action patterns qualitatively and quantitatively, and sometimes the ability to predict future human related events. In this paper, we firstly discuss the realms which can be predicted with current social media, then overview available predictors and techniques of prediction, and finally discuss challenges and possible future directions.", "title": "" }, { "docid": "54f691d51aeaea18f2f7830e36af820f", "text": "The Internet of Things (IoT) many be thought of as the availability of physical objects, or devices, on the Internet [1]. Given such an arrangement it is possible to access sensor data and control actuators remotely. Furthermore such data may be combined with data from other sources - e.g. with data that is contained in the Web - or operated on by cloud based services to create applications far richer than can be provided by isolated embedded systems [2,3]. This is the vision of the Internet of Things. We present a cloud-compatible open source controller and an extensible API, hereafter referred to as `IoTCloud', which enables developers to create scalable high performance IoT and sensor-centric applications. The IoTCloud software is written in Java and built on popular open source packages such as Apache Active MQ [4] and JBoss Netty [5]. We present an overview of the IoT Cloud architecture and describe its developer API. Next we introduce the FutureGrid - a geographically distributed and heterogeneous cloud test-bed [6,7] - used in our experiments. Our preliminary results indicate that a distributed cloud infrastructure like the FutureGrid coupled with our flexible IoTCloud framework is an environment suitable for the study and development of IoT and sensor-centric applications. We also report on our initial study of certain measured characteristics of an IoTCloud application running on the FutureGrid. We conclude by inviting interested parties to use the IoTCloud to create their own IoT applications or contribute to its further development.", "title": "" }, { "docid": "d90467d05b4df62adc94b7c150013968", "text": "Bacterial flagella and type III secretion system (T3SS) are evolutionarily related molecular transport machineries. Flagella mediate bacterial motility; the T3SS delivers virulence effectors to block host defenses. The inflammasome is a cytosolic multi-protein complex that activates caspase-1. Active caspase-1 triggers interleukin-1β (IL-1β)/IL-18 maturation and macrophage pyroptotic death to mount an inflammatory response. Central to the inflammasome is a pattern recognition receptor that activates caspase-1 either directly or through an adapter protein. Studies in the past 10 years have established a NAIP-NLRC4 inflammasome, in which NAIPs are cytosolic receptors for bacterial flagellin and T3SS rod/needle proteins, while NLRC4 acts as an adapter for caspase-1 activation. Given the wide presence of flagella and the T3SS in bacteria, the NAIP-NLRC4 inflammasome plays a critical role in anti-bacteria defenses. 
Here, we review the discovery of the NAIP-NLRC4 inflammasome and further discuss recent advances related to its biochemical mechanism and biological function as well as its connection to human autoinflammatory disease.", "title": "" }, { "docid": "1fba985f04f5dc12954ad61577745b6b", "text": "The ability to identify user attributes such as gender, age, regional origin, and political orientation solely from user language in social media such as Twitter or similar highly informal content has important applications in advertising, personalization, and recommendation. This paper includes a novel investigation of stacked-SVM-based classification algorithms over a rich set of original features, applied to classifying these four user attributes. We propose new sociolinguisticsbased features for classifying user attributes in Twitter-style informal written genres, as distinct from the other primarily spoken genres previously studied in the user-property classification literature. Our models, singly and in ensemble, significantly outperform baseline models in all cases.", "title": "" }, { "docid": "59375cb8654df781e88c033f24fdb94f", "text": "Cultural Heritage represents a world wide resource of inestimable value, attracting millions of visitors every year to monuments, museums and art exhibitions. A fundamental aspect of this resource is represented by its fruition and promotion. Indeed, to achieve a fruition of a cultural space that is sustainable, it is necessary to realize smart solutions for visitors' interaction to enrich their visiting experience. In this paper we present a service-oriented framework aimed to transform indoor Cultural Heritage sites in smart environments, which enforces a set of multimedia and communication services to support the changing of these spaces in an indispensable dynamic instrument for knowledge, fruition and growth for all the people. Following the Internet of Things paradigm, the proposed framework relies on the integration of a Wireless Sensor Network (WSN) with Wi-Fi and Bluetooth technologies to identify, locate and support visitors equipped with their own mobile devices.", "title": "" }, { "docid": "1f52dc0ee257b56b24c49b9520cf38da", "text": "We extend approaches for skinning characters to the general setting of skinning deformable mesh animations. We provide an automatic algorithm for generating progressive skinning approximations, that is particularly efficient for pseudo-articulated motions. Our contributions include the use of nonparametric mean shift clustering of high-dimensional mesh rotation sequences to automatically identify statistically relevant bones, and robust least squares methods to determine bone transformations, bone-vertex influence sets, and vertex weight values. We use a low-rank data reduction model defined in the undeformed mesh configuration to provide progressive convergence with a fixed number of bones. We show that the resulting skinned animations enable efficient hardware rendering, rest pose editing, and deformable collision detection. Finally, we present numerous examples where skins were automatically generated using a single set of parameter values.", "title": "" }, { "docid": "58858f0cd3561614f1742fe7b0380861", "text": "This study focuses on how technology can encourage and ease awkwardness-free communications between people in real-world scenarios. We propose a device, The Wearable Aura, able to project a personalized animation onto one's Personal Distance zone. 
This projection, as an extension of one-self is reactive to user's cognitive status, aware of its environment, context and user's activity. Our user study supports the idea that an interactive projection around an individual can indeed benefit the communications with other individuals.", "title": "" }, { "docid": "65271fcf27d43ef88910e0a872eec0b9", "text": "Purpose – The purpose of this paper is to investige whether online environment cues (web site quality and web site brand) affect customer purchase intention towards an online retailer and whether this impact is mediated by customer trust and perceived risk. The study also aimed to assess the degree of reciprocity between consumers’ trust and perceived risk in the context of an online shopping environment. Design/methodology/approach – The study proposed a research framework for testing the relationships among the constructs based on the stimulus-organism-response framework. In addition, this study developed a non-recursive model. After the validation of measurement scales, empirical analyses were performed using structural equation modelling. Findings – The findings confirm that web site quality and web site brand affect consumers’ trust and perceived risk, and in turn, consumer purchase intention. Notably, this study finds that the web site brand is a more important cue than web site quality in influencing customers’ purchase intention. Furthermore, the study reveals that the relationship between trust and perceived risk is reciprocal. Research limitations/implications – This study adopted four dimensions – technical adequacy, content quality, specific content and appearance – to measure web site quality. However, there are still many competing concepts regarding the measurement of web site quality. Further studies using other dimensional measures may be needed to verify the research model. Practical implications – Online retailers should focus their marketing strategies more on establishing the brand of the web site rather than improving the functionality of the web site. Originality/value – This study proposed a non-recursive model for empirically analysing the link between web site quality, web site brand, trust, perceived risk and purchase intention towards the online retailer.", "title": "" }, { "docid": "b2b40bf9ee148df9973fd9f69b80ff57", "text": "Convolutional Neural Networks (CNNs) are becoming increasingly popular due to their superior performance in the domain of computer vision, in applications such as objection detection and recognition. However, they demand complex, power-consuming hardware which makes them unsuitable for implementation on low-power mobile and embedded devices. In this paper, a description and comparison of various techniques is presented which aim to mitigate this problem. This is primarily achieved by quantizing the floating-point weights and activations to reduce the hardware requirements, and adapting the training and inference algorithms to maintain the network’s performance.", "title": "" }, { "docid": "299d59735ea1170228aff531645b5d4a", "text": "While the economic case for cloud computing is compelling, the security challenges it poses are equally striking. In this work we strive to frame the full space of cloud-computing security issues, attempting to separate justified concerns from possible over-reactions. We examine contemporary and historical perspectives from industry, academia, government, and “black hats”. 
We argue that few cloud computing security issues are fundamentally new or fundamentally intractable; often what appears “new” is so only relative to “traditional” computing of the past several years. Looking back further to the time-sharing era, many of these problems already received attention. On the other hand, we argue that two facets are to some degree new and fundamental to cloud computing: the complexities of multi-party trust considerations, and the ensuing need for mutual auditability.", "title": "" }, { "docid": "bf8fd75add18b5c70ef16a6f9a358742", "text": "Crowd behavior understanding is crucial yet challenging across a wide range of applications, since crowd behavior is inherently determined by a sequential decision-making process based on various factors, such as the pedestrians’ own destinations, interaction with nearby pedestrians and anticipation of upcoming events. In this paper, we propose a novel framework of Social-Aware Generative Adversarial Imitation Learning (SA-GAIL) to mimic the underlying decisionmaking process of pedestrians in crowds. Specifically, we infer the latent factors of human decision-making process in an unsupervised manner by extending the Generative Adversarial Imitation Learning framework to anticipate future paths of pedestrians. Different factors of human decision making are disentangled with mutual information maximization, with the process modeled by collision avoidance regularization and Social-Aware LSTMs. Experimental results demonstrate the potential of our framework in disentangling the latent decision-making factors of pedestrians and stronger abilities in predicting future trajectories.", "title": "" }, { "docid": "135e3fa3b9487255b6ee67465b645fc9", "text": "In the past few decades, the concepts of personalization in the forms of recommender system, information filtering, or customization not only are quickly accepted by the public but also draw considerable attention from enterprises. Therefore, a number of studies based on personalized recommendations have subsequently been produced. Most of these studies apply on E-commerce, website, and information, and some of them apply on teaching, tourism, and TV programs. Because the recent rise of Web 3.0 emphasizes on providing more complete personal information and service through the efficient method, the recommender application gradually develops towards mobile commerce, mobile information, or social network. Many studies have adopted Content-Based (CB), Collaborative Filtering (CF), and hybrid approach as the main recommender style in the analysis. There are few or even no studies that have emphasized on the review of recommendation recently. For this reason, this study aims to collect, analyze, and review the research topics of recommender systems and their application in the past few decades. This study collects the research types and from various researchers. The literature arrangement of this study can help researchers to understand the recommender system researches in a clear sense and in a short time.", "title": "" }, { "docid": "6572c7d33fcb3f1930a41b4b15635ffe", "text": "Neurons in area MT (V5) are selective for the direction of visual motion. In addition, many are selective for the motion of complex patterns independent of the orientation of their components, a behavior not seen in earlier visual areas. 
We show that the responses of MT cells can be captured by a linear-nonlinear model that operates not on the visual stimulus, but on the afferent responses of a population of nonlinear V1 cells. We fit this cascade model to responses of individual MT neurons and show that it robustly predicts the separately measured responses to gratings and plaids. The model captures the full range of pattern motion selectivity found in MT. Cells that signal pattern motion are distinguished by having convergent excitatory input from V1 cells with a wide range of preferred directions, strong motion opponent suppression and a tuned normalization that may reflect suppressive input from the surround of V1 cells.", "title": "" }, { "docid": "e4f648d12495a2d7615fe13c84f35bbe", "text": "We propose a simple modification to existing neural machine translation (NMT) models that enables using a single universal model to translate between multiple languages while allowing for language specific parameterization, and that can also be used for domain adaptation. Our approach requires no changes to the model architecture of a standard NMT system, but instead introduces a new component, the contextual parameter generator (CPG), that generates the parameters of the system (e.g., weights in a neural network). This parameter generator accepts source and target language embeddings as input, and generates the parameters for the encoder and the decoder, respectively. The rest of the model remains unchanged and is shared across all languages. We show how this simple modification enables the system to use monolingual data for training and also perform zero-shot translation. We further show it is able to surpass state-of-theart performance for both the IWSLT-15 and IWSLT-17 datasets and that the learned language embeddings are able to uncover interesting relationships between languages.", "title": "" }, { "docid": "f5a4863144a484d9f5c3b9cff96baf65", "text": "CLINICAL PRESENTATION A 14-year-old boy was referred to the Oral Medicine Clinic, School of Dentistry, Universidade Federal de Minas Gerais, for evaluation of a red nodular lesion of the palate. The lesion, which measured 10 10 5 mm, was located in the midline of the hard palate and was firm to palpation. A focal area of ulceration was noted in the center of the lesion (Fig. 1). The patient reported that the lesion had been present for 2 months and was somewhat painful, presumably secondary to the ulceration. No lymph nodes were palpable. The patient’s medical history was otherwise noncontributory. No osseous alterations were noted on occlusal radiograph (Fig. 2) or computerized tomography scanning.", "title": "" }, { "docid": "d5c3e1baa2425616154e9d5252e7d393", "text": "Article history: Available online 18 June 2010", "title": "" }, { "docid": "fba109e4627d4bb580d07368e3c00cc1", "text": "-Wheeled-tracked vehicles are undoubtedly the most popular means of transportation. However, these vehicles are mainly suitable for relatively flat terrain. Legged vehicles, on the other hand, have the potential to handle wide variety of terrain. Robug IIs is a legged climbing robot designed to work in relatively unstructured and rough terrain. It has the capability of walking, climbing vertical surfaces and performing autonomous floor to wall transfer. The sensing technique used in Robug IIs is mainly tactile and ultrasonic sensing. A set of reflexive rules have been developed for the robot to react to the uncertainty of the working environment. 
The robot also has the intelligence to seek and verify its own foot-holds. It is envisaged that the main application of robot is for remote inspection and maintenance in hazardous environments. Keywords—Legged robot, climbing service robot, insect inspired robot, pneumatic control, fuzzy logic.", "title": "" }, { "docid": "4493a071f0dbdf7464d7ad299fec97d3", "text": "Drawing upon self-determination theory, this study tested different types of behavioral regulation as parallel mediators of the association between the job’s motivating potential, autonomy-supportive leadership, and understanding the organization’s strategy, on the one hand, and job satisfaction, turnover intention, and two types of organizational citizenship behaviors (OCB), on the other hand. In particular, intrinsic motivation and identified regulation were contrasted as idiosyncratic motivational processes. Analyses were based on data from 201 employees in the Swiss insurance industry. Results supported both types of self-determined motivation as mediators of specific antecedent-outcome relationships. Identified regulation, for example, particularly mediated the impact of contextual antecedents on both civic virtue and altruism OCB. Overall, controlled types of behavioral regulation showed comparatively weak relations to antecedents or consequences. The unique characteristics of motivational processes and potential explanations for the weak associations of controlled motivation are discussed.", "title": "" }, { "docid": "3e01af44d4819d8c78615e66f56e5983", "text": "The amount of dynamic content on the web has been steadily increasing. Scripting languages such as JavaScript and browser extensions such as Adobe's Flash have been instrumental in creating web-based interfaces that are similar to those of traditional applications. Dynamic content has also become popular in advertising, where Flash is used to create rich, interactive ads that are displayed on hundreds of millions of computers per day. Unfortunately, the success of Flash-based advertisements and applications attracted the attention of malware authors, who started to leverage Flash to deliver attacks through advertising networks. This paper presents a novel approach whose goal is to automate the analysis of Flash content to identify malicious behavior. We designed and implemented a tool based on the approach, and we tested it on a large corpus of real-world Flash advertisements. The results show that our tool is able to reliably detect malicious Flash ads with limited false positives. We made our tool available publicly and it is routinely used by thousands of users.", "title": "" } ]
scidocsrr
30b3507888facf0d7287a9c605cb78a7
Real-time calculus for scheduling hard real-time systems
[ { "docid": "39321bc85746dc43736a0435c939c7da", "text": "We use recent network calculus results to study some properties of lossless multiplexing as it may be used in guaranteed service networks. We call network calculus a set of results that apply min-plus algebra to packet networks. We provide a simple proof that shaping a traffic stream to conform to a burstiness constraint preserves the original constraints satisfied by the traffic stream We show how all rate-based packet schedulers can be modeled with a simple rate latency service curve. Then we define a general form of deterministic effective bandwidth and equivalent capacity. We find that call acceptance regions based on deterministic criteria (loss or delay) are convex, in contrast to statistical cases where it is the complement of the region which is convex. We thus find that, in general, the limit of the call acceptance region based on statistical multiplexing when the loss probability target tends to 0 may be strictly larger than the call acceptance region based on lossless multiplexing. Finally, we consider the problem of determining the optimal parameters of a variable bit rate (VBR) connection when it is used as a trunk, or tunnel, given that the input traffic is known. We find that there is an optimal peak rate for the VBR trunk, essentially insensitive to the optimization criteria. For a linear cost function, we find an explicit algorithm for the optimal remaining parameters of the VBR trunk.", "title": "" } ]
[ { "docid": "ee865e3291eff95b5977b54c22b59f19", "text": "Fuzzing is a process where random, almost valid, input streams are automatically generated and fed into computer systems in order to test the robustness of userexposed interfaces. We fuzz the Linux kernel system call interface; unlike previous work that attempts to generically fuzz all of an operating system’s system calls, we explore the effectiveness of using specific domain knowledge and focus on finding bugs and security issues related to a single Linux system call. The perf event open() system call was introduced in 2009 and has grown to be a complex interface with over 40 arguments that interact in subtle ways. By using detailed knowledge of typical perf event usage patterns we develop a custom tool, perf fuzzer, that has found bugs that more generic, system-wide, fuzzers have missed. Numerous crashing bugs have been found, including a local root exploit. Fixes for these bugs have been merged into the main Linux source tree. Testing continues to find new bugs, although they are increasingly hard to isolate, requiring development of new isolation techniques and helper utilities. We describe the development of perf fuzzer, examine the bugs found, and discuss ways that this work can be extended to find more bugs and cover other system calls.", "title": "" }, { "docid": "3cf25855521eccfb51bdfebdd0f0a2fd", "text": "We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 × 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing/adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.", "title": "" }, { "docid": "457a23b087e59c6076ef6f9da7214fea", "text": "Supervised learning is widely used in training autonomous driving vehicle. However, it is trained with large amount of supervised labeled data. Reinforcement learning can be trained without abundant labeled data, but we cannot train it in reality because it would involve many unpredictable accidents. Nevertheless, training an agent with good performance in virtual environment is relatively much easier. Because of the huge difference between virtual and real, how to fill the gap between virtual and real is challenging. In this paper, we proposed a novel framework of reinforcement learning with image semantic segmentation network to make the whole model adaptable to reality. The agent is trained in TORCS, a car racing simulator.", "title": "" }, { "docid": "f2e13ac41fc61bfc1b8e9c7171608518", "text": "BACKGROUND\nThe exact anatomical cause of the tear trough remains undefined. 
This study was performed to identify the anatomical basis for the tear trough deformity.\n\n\nMETHODS\nForty-eight cadaveric hemifaces were dissected. With the skin over the midcheek intact, the tear trough area was approached through the preseptal space above and prezygomatic space below. The origins of the palpebral and orbital parts of the orbicularis oculi (which sandwich the ligament) were released meticulously from the maxilla, and the tear trough ligament was isolated intact and in continuity with the orbicularis retaining ligament. The ligaments were submitted for histologic analysis.\n\n\nRESULTS\nA true osteocutaneous ligament called the tear trough ligament was consistently found on the maxilla, between the palpebral and orbital parts of the orbicularis oculi, cephalad and caudal to the ligament, respectively. It commences medially, at the level of the insertion of the medial canthal tendon, just inferior to the anterior lacrimal crest, to approximately the medial-pupil line, where it continues laterally as the bilayered orbicularis retaining ligament. Histologic evaluation confirmed the ligamentous nature of the tear trough ligament, with features identical to those of the zygomatic ligament.\n\n\nCONCLUSIONS\nThis study clearly demonstrated that the prominence of the tear trough has its anatomical origin in the tear trough ligament. This ligament has not been isolated previously using standard dissection, but using the approach described, the tear trough ligament is clearly seen. The description of this ligament sheds new light on considerations when designing procedures to address the tear trough and the midcheek.", "title": "" }, { "docid": "d67a2217844cfd2c7a6cbeff5f0e5e98", "text": "Monitoring aquatic environment is of great interest to the ecosystem, marine life, and human health. This paper presents the design and implementation of Samba -- an aquatic surveillance robot that integrates an off-the-shelf Android smartphone and a robotic fish to monitor harmful aquatic processes such as oil spill and harmful algal blooms. Using the built-in camera of on-board smartphone, Samba can detect spatially dispersed aquatic processes in dynamic and complex environments. To reduce the excessive false alarms caused by the non-water area (e.g., trees on the shore), Samba segments the captured images and performs target detection in the identified water area only. However, a major challenge in the design of Samba is the high energy consumption resulted from the continuous image segmentation. We propose a novel approach that leverages the power-efficient inertial sensors on smartphone to assist the image processing. In particular, based on the learned mapping models between inertial and visual features, Samba uses real-time inertial sensor readings to estimate the visual features that guide the image segmentation, significantly reducing energy consumption and computation overhead. Samba also features a set of lightweight and robust computer vision algorithms, which detect harmful aquatic processes based on their distinctive color features. Lastly, Samba employs a feedback-based rotation control algorithm to adapt to spatiotemporal evolution of the target aquatic process. We have implemented a Samba prototype and evaluated it through extensive field experiments, lab experiments, and trace-driven simulations. 
The results show that Samba can achieve 94% detection rate, 5% false alarm rate, and a lifetime up to nearly two months.", "title": "" }, { "docid": "67995490350c68f286029d8b401d78d8", "text": "OBJECTIVE\nModifiable risk factors for dementia were recently identified and compiled in a systematic review. The 'Lifestyle for Brain Health' (LIBRA) score, reflecting someone's potential for dementia prevention, was studied in a large longitudinal population-based sample with respect to predicting cognitive change over an observation period of up to 16 years.\n\n\nMETHODS\nLifestyle for Brain Health was calculated at baseline for 949 participants aged 50-81 years from the Maastricht Ageing Study. The predictive value of LIBRA for incident dementia and cognitive impairment was examined by using Cox proportional hazard models and by testing its relation with cognitive decline.\n\n\nRESULTS\nLifestyle for Brain Health predicted future risk of dementia, as well as risk of cognitive impairment. A one-point increase in LIBRA score related to 19% higher risk for dementia and 9% higher risk for cognitive impairment. LIBRA predicted rate of decline in processing speed, but not memory or executive functioning.\n\n\nCONCLUSIONS\nLifestyle for Brain Health (LIBRA) may help in identifying and monitoring risk status in dementia-prevention programmes, by targeting modifiable, lifestyle-related risk factors. Copyright © 2017 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "3cbb932e65cf2150cb32aaf930b45492", "text": "In software industries, various open source projects utilize the services of Bug Tracking Systems that let users submit software issues or bugs and allow developers to respond to and fix them. The users label the reports as bugs or any other relevant class. This classification helps to decide which team or personnel would be responsible for dealing with an issue. A major problem here is that users tend to wrongly classify the issues, because of which a middleman called a bug triager is required to resolve any misclassifications. This ensures no time is wasted at the developer end. 
This approach is very time consuming and therefore it has been of great interest to automate the classification process, not only to speed things up, but to lower the amount of errors as well. In the literature, several approaches including machine learning techniques have been proposed to automate text classification. However, there has not been an extensive comparison on the performance of different natural language classifiers in this field. In this paper we compare general natural language data classifying techniques using five different machine learning algorithms: Naive Bayes, kNN, Pegasos, Rocchio and Perceptron. The performance comparison of these algorithms was done on the basis of their apparent error rates. The data-set involved four different projects, Httpclient, Jackrabbit, Lucene and Tomcat5, that used two different Bug Tracking Systems - Bugzilla and Jira. An experimental comparison of pre-processing techniques was also performed.", "title": "" }, { "docid": "e8a1330f93a701939367bd390e9018c7", "text": "An eccentric paddle locomotion mechanism based on the epicyclic gear mechanism (ePaddle-EGM), which was proposed to enhance the mobility of amphibious robots in multiterrain tasks, can perform various terrestrial and aquatic gaits. Two of the feasible aquatic gaits are the rotational paddling gait and the oscillating paddling gait. The former one has been studied in our previous work, and a capacity of generating vectored thrust has been found. In this letter, we focus on the oscillating paddling gait by measuring the generated thrusts of the gait on an ePaddle-EGM prototype module. Experimental results verify that the oscillating paddling gait can generate vectored thrust by changing the location of the paddle shaft as well. Furthermore, we compare the oscillating paddling gait with the rotational paddling gait at the vectored thrusting property, magnitude of the thrust, and the gait efficiency.", "title": "" }, { "docid": "b08023089abd684d26fabefb038cc9fa", "text": "IMSI catching is a problem on all generations of mobile telecommunication networks, i.e., 2G (GSM, GPRS), 3G (HDSPA, EDGE, UMTS) and 4G (LTE, LTE+). Currently, the SIM card of a mobile phone has to reveal its identity over an insecure plaintext transmission, before encryption is enabled. This identifier (the IMSI) can be intercepted by adversaries that mount a passive or active attack. Such identity exposure attacks are commonly referred to as 'IMSI catching'. Since the IMSI is uniquely identifying, unauthorized exposure can lead to various location privacy attacks. We propose a solution, which essentially replaces the IMSIs with changing pseudonyms that are only identifiable by the home network of the SIM's own network provider. Consequently, these pseudonyms are unlinkable by intermediate network providers and malicious adversaries, and therefore mitigate both passive and active attacks, which we also formally verified using ProVerif. Our solution is compatible with the current specifications of the mobile standards and therefore requires no change in the infrastructure or any of the already massively deployed network equipment. The proposed method only requires limited changes to the SIM and the authentication server, both of which are under control of the user's network provider. 
Therefore, any individual (virtual) provider that distributes SIM cards and controls its own authentication server can deploy a more privacy friendly mobile network that is resilient against IMSI catching attacks.", "title": "" }, { "docid": "b5d54f10aebd99d898dfb52d75e468e6", "text": "As the technology to secure information improves, hackers will employ less technical means to get access to unauthorized data. The use of Social Engineering as a non tech method of hacking has been increasingly used during the past few years. There are different types of social engineering methods reported but what is lacking is a unifying effort to understand these methods in the aggregate. This paper aims to classify these methods through taxonomy so that organizations can gain a better understanding of these attack methods and accordingly be vigilant against them.", "title": "" }, { "docid": "e5175084f08ad8efc3244f52cbb8ef7b", "text": "We consider a multi-agent framework for distributed optimization where each agent in the network has access to a local convex function and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents in the network. When the local functions are strongly-convex with Lipschitz-continuous gradients, we show that a subsequence of the iterates at each agent converges to a neighbourhood of the global minimum, where the size of the neighbourhood depends on the degree of asynchrony in the multi-agent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that Asynchronous Subgradient-Push can minimize the global objective faster than state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.", "title": "" }, { "docid": "f157b3fb65d4ce1df6d6bb549b020fa0", "text": "We have developed a reversible method to convert color graphics and pictures to gray images. The method is based on mapping colors to low-visibility high-frequency textures that are applied onto the gray image. After receiving a monochrome textured image, the decoder can identify the textures and recover the color information. More specifically, the image is textured by carrying a subband (wavelet) transform and replacing bandpass subbands by the chrominance signals. The low-pass subband is the same as that of the luminance signal. The decoder performs a wavelet transform on the received gray image and recovers the chrominance channels. The intent is to print color images with black and white printers and to be able to recover the color information afterwards. Registration problems are discussed and examples are presented.", "title": "" }, { "docid": "73a62915c29942d2fac0570cac7eb3e0", "text": "In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts the vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. 
In the inference, the networks outputs are used by a real time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.", "title": "" }, { "docid": "ead7484035be253c2d879992bc7ef632", "text": "Solutions are urgently required for the growing number of infections caused by antibiotic-resistant bacteria. Bacteriocins, which are antimicrobial peptides produced by certain bacteria, might warrant serious consideration as alternatives to traditional antibiotics. These molecules exhibit significant potency against other bacteria (including antibiotic-resistant strains), are stable and can have narrow or broad activity spectra. Bacteriocins can even be produced in situ in the gut by probiotic bacteria to combat intestinal infections. Although the application of specific bacteriocins might be curtailed by the development of resistance, an understanding of the mechanisms by which such resistance could emerge will enable researchers to develop strategies to minimize this potential problem.", "title": "" }, { "docid": "29c7808c8ff3c8babf5785bd9f6e758a", "text": "Robots have become more common in our society as it penetrates the education system as well as in industrial area. More researches have been done on robotics and its application in education area. Are the usage of robots in teaching and learning actually work and effective in Malaysian context? What is the importance of educational robotics in education and what skills will be sharpened in using robotics in education? As programming is vital in educational robotics another issues arise – which programming is suitable for Malaysian schools and how to implement it among the students? As per whole discussion, a new robotic curriculum will be suggested. This paper present a review on educational robotics, its advantages to educational fields, the hardware design and the common programming software used which can be implemented among Malaysian students. The results from the overview will help to spark the interest to not only researchers in the field of human–robot interaction but also administration in educational institutes who wish to understand the wider implications of adopting robots in education.", "title": "" }, { "docid": "7a4f42c389dbca2f3c13469204a22edd", "text": "This article attempts to capture and summarize the known technical information and recommendations for analysis of furan test results. It will also provide the technical basis for continued gathering and evaluation of furan data for liquid power transformers, and provide a recommended structure for collecting that data.", "title": "" }, { "docid": "b271916d455789760d1aa6fda6af85c3", "text": "Over the last decade, automated vehicles have been widely researched and their massive potential has been verified through several milestone demonstrations. However, there are still many challenges ahead. One of the biggest challenges is integrating them into urban environments in which dilemmas occur frequently. Conventional automated driving strategies make automated vehicles foolish in dilemmas such as making lane-change in heavy traffic, handling a yellow traffic light and crossing a double-yellow line to pass an illegally parked car. In this paper, we introduce a novel automated driving strategy that allows automated vehicles to tackle these dilemmas. 
The key insight behind our automated driving strategy is that expert drivers understand human interactions on the road and comply with mutually-accepted rules, which are learned from countless experiences. In order to teach the driving strategy of expert drivers to automated vehicles, we propose a general learning framework based on maximum entropy inverse reinforcement learning and Gaussian process. Experiments are conducted on a 5.2 km-long campus road at Seoul National University and demonstrate that our framework performs comparably to expert drivers in planning trajectories to handle various dilemmas.", "title": "" }, { "docid": "b11d26222f6ec0f2a1cbbb7389a7eefb", "text": "In this paper, control system is designed to stabilize the camera gimbal system used in different air borne systems for applications such as target tracking, surveillance, aerial photography, autonomous navigation and so on. This camera gimbal system replaces many traditional tracking systems such as radar which are heavy and large to mount on air vehicles. So, the stabilization of camera gimbal is very important to eliminate shakes and vibrations in photography, provides accuracy in tracking moving target and so on. The control system for this gimbal is developed using various control methods and algorithms to provide better and efficient performance with flexibility, accuracy and feasibility. PID controller is designed to control camera gimbal due to its effectiveness, simplicity and feasibility. The tuning parameters of PID controller are tuned using traditional and evolutionary algorithms such as PSO and GA to provide better performance and accuracy in system response. PSO and GA are used due to its dynamic and static performance, computational efficiency and so on. In this paper, performance of system with conventional PID and PSO, GA tuned PID controllers are compared and optimized algorithm is implemented.", "title": "" }, { "docid": "64702593fd9271b7caa4178594f26469", "text": "Microsoft operates the Azure SQL Database (ASD) cloud service, one of the dominant relational cloud database services in the market today. To aid the academic community in their research on designing and efficiently operating cloud database services, Microsoft is introducing the release of production-level telemetry traces from the ASD service. This telemetry data set provides, over a wide set of important hardware resources and counters, the consumption level of each customer database replica. The first release will be a multi-month time-series data set that includes the full cluster traces from two different ASD global regions.", "title": "" } ]
scidocsrr
68833210810dfc3281ef5425598ac855
Erratum: Social Media and Fake News in the 2016 Election
[ { "docid": "940df82b743d99cb3f6dff903920482f", "text": "Online publishing, social networks, and web search have dramatically lowered the costs to produce, distribute, and discover news articles. Some scholars argue that such technological changes increase exposure to diverse perspectives, while others worry they increase ideological segregation. We address the issue by examining web browsing histories for 50,000 U.S.-located users who regularly read online news. We find that social networks and search engines increase the mean ideological distance between individuals. However, somewhat counterintuitively, we also find these same channels increase an individual’s exposure to material from his or her less preferred side of the political spectrum. Finally, we show that the vast majority of online news consumption is accounted for by individuals simply visiting the home pages of their favorite, typically mainstream, news outlets, tempering the consequences—both positive and negative—of recent technological changes. We thus uncover evidence for both sides of the debate, while also finding that the magnitude of the e↵ects are relatively modest. WORD COUNT: 5,762 words", "title": "" } ]
[ { "docid": "be989252cdad4886613f53c7831454cb", "text": "Stress and cortisol are known to impair memory retrieval of well-consolidated declarative material. The effects of cortisol on memory retrieval may in particular be due to glucocorticoid (GC) receptors in the hippocampus and prefrontal cortex (PFC). Therefore, effects of stress and cortisol should be observable on both hippocampal-dependent declarative memory retrieval and PFC-dependent working memory (WM). In the present study, it was tested whether psychosocial stress would impair both WM and memory retrieval in 20 young healthy men. In addition, the association between cortisol levels and cognitive performance was assessed. It was found that stress impaired WM at high loads, but not at low loads in a Sternberg paradigm. High cortisol levels at the time of testing were associated with slow WM performance at high loads, and with impaired recall of moderately emotional, but not of highly emotional paragraphs. Furthermore, performance at high WM loads was associated with memory retrieval. These data extend previous results of pharmacological studies in finding WM impairments after acute stress at high workloads and cortisol-related retrieval impairments.", "title": "" }, { "docid": "eee9b5301c83faf4fe8fd786f0d99efd", "text": "We present a named entity recognition and classification system that uses only probabilistic character-level features. Classifications by multiple orthographic tries are combined in a hidden Markov model framework to incorporate both internal and contextual evidence. As part of the system, we perform a preprocessing stage in which capitalisation is restored to sentence-initial and all-caps words with high accuracy. We report f-values of 86.65 and 79.78 for English, and 50.62 and 54.43 for the German datasets.", "title": "" }, { "docid": "069636576cbf6c5dd8cead8fff32ea4b", "text": "Sleep-disordered breathing-comprising obstructive sleep apnoea (OSA), central sleep apnoea (CSA), or a combination of the two-is found in over half of heart failure (HF) patients and may have harmful effects on cardiac function, with swings in intrathoracic pressure (and therefore preload and afterload), blood pressure, sympathetic activity, and repetitive hypoxaemia. It is associated with reduced health-related quality of life, higher healthcare utilization, and a poor prognosis. Whilst continuous positive airway pressure (CPAP) is the treatment of choice for patients with daytime sleepiness due to OSA, the optimal management of CSA remains uncertain. There is much circumstantial evidence that the treatment of OSA in HF patients with CPAP can improve symptoms, cardiac function, biomarkers of cardiovascular disease, and quality of life, but the quality of evidence for an improvement in mortality is weak. For systolic HF patients with CSA, the CANPAP trial did not demonstrate an overall survival or hospitalization advantage for CPAP. A minute ventilation-targeted positive airway therapy, adaptive servoventilation (ASV), can control CSA and improves several surrogate markers of cardiovascular outcome, but in the recently published SERVE-HF randomized trial, ASV was associated with significantly increased mortality and no improvement in HF hospitalization or quality of life. Further research is needed to clarify the therapeutic rationale for the treatment of CSA in HF. 
Cardiologists should have a high index of suspicion for sleep-disordered breathing in those with HF, and work closely with sleep physicians to optimize patient management.", "title": "" }, { "docid": "408d3db3b2126990611fdc3a62a985ea", "text": "Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.", "title": "" }, { "docid": "075e99f1041cbd357eff021f3f002c60", "text": "The human finger possesses a structure called the extensor mechanism, a web-like collection of tendinous material that lies on the dorsal side of each finger and connects the controlling muscles to the bones of the finger. In past robotic hand designs, extensor mechanisms have generally not been employed due in part to their complexity and a lack of understanding of their utility. This paper presents our first design and analysis effort of an artificial extensor mechanism. The goal of our analysis is to provide an understanding of the extensor mechanism’s functionality so that we can extract the crucial features that need to be mimicked to construct an anatomical robotic hand. With the inclusion of an extensor mechanism, we believe all possible human finger postures can be achieved using four cable driven actuators. We identified that this extensor mechanism gives independent control of the metacarpo-phalangeal (MCP) joint and acts not only as an extensor but also as a flexor, abductor, adductor, or rotator depending on the finger’s posture.", "title": "" }, { "docid": "acd32a1a25cedbc7e1201a31a1436a2b", "text": "Understanding others' mental states is a crucial skill that enables the complex social relationships that characterize human societies. Yet little research has investigated what fosters this skill, which is known as Theory of Mind (ToM), in adults. We present five experiments showing that reading literary fiction led to better performance on tests of affective ToM (experiments 1 to 5) and cognitive ToM (experiments 4 and 5) compared with reading nonfiction (experiments 1), popular fiction (experiments 2 to 5), or nothing at all (experiments 2 and 5). Specifically, these results show that reading literary fiction temporarily enhances ToM. More broadly, they suggest that ToM may be influenced by engagement with works of art.", "title": "" }, { "docid": "25739e04a42f7309127596846d9eefa3", "text": "We consider a new formulation of abduction. Our formulation differs from the existing approaches in that it does not cast the “plausibility” of explanations in terms of either syntactic minimality or an explicitly given prior distribution. Instead, “plausibility,” along with the rules of the domain, is learned from concrete examples (settings of attributes). Our version of abduction thus falls in the “learning to reason” framework of Khardon and Roth. Such approaches enable us to capture a natural notion of “plausibility” in a domain while avoiding the problem of specifying an explicit representation of what is “plausible,” a task that humans find extremely difficult. In this work, we specifically consider the question of which syntactic classes of formulas have efficient algorithms for abduction. 
It turns out that while the representation of the query is irrelevant to the computational complexity of our problem, the representation of the explanation critically affects its tractability. We find that the class of k-DNF explanations can be found in polynomial time for any fixed k; but, we also find evidence that even very weak versions of our abduction task are intractable for the usual class of conjunctive explanations. This evidence is provided by a connection to the usual, inductive PAC-learning model proposed by Valiant. We also briefly consider an exception-tolerant variant of abduction. We observe that it is possible for polynomial-time algorithms to tolerate a few adversarially chosen exceptions, again for the class of kDNF explanations. All of the algorithms we study are particularly simple, and indeed are variants of a rule proposed by Mill.", "title": "" }, { "docid": "d5641090db7579faff175e4548c25096", "text": "Integration is central to HIV-1 replication and helps mold the reservoir of cells that persists in AIDS patients. HIV-1 interacts with specific cellular factors to target integration to interior regions of transcriptionally active genes within gene-dense regions of chromatin. The viral capsid interacts with several proteins that are additionally implicated in virus nuclear import, including cleavage and polyadenylation specificity factor 6, to suppress integration into heterochromatin. The viral integrase protein interacts with transcriptional co-activator lens epithelium-derived growth factor p75 to principally position integration within gene bodies. The integrase additionally senses target DNA distortion and nucleotide sequence to help fine-tune the specific phosphodiester bonds that are cleaved at integration sites. Research into virus–host interactions that underlie HIV-1 integration targeting has aided the development of a novel class of integrase inhibitors and may help to improve the safety of viral-based gene therapy vectors.", "title": "" }, { "docid": "a6cb2554774f2453348a133debf72085", "text": "Mobile computing offers potential opportunities for students’ learning. It is important to have an operational understanding of the context in developing a user interface that is both useful and flexible. The author believes that the complexity of the relationships involved can be analysed using activity theory. Activity theory, as a social and cultural psychological theory, can be used to design a mobile learning environment. This paper presents the use of activity theory as a framework for describing the components of an activity system for the design of a context-aware mobile learning application.", "title": "" }, { "docid": "04097beae36a8414cf53d8418db745ab", "text": "Accurate terrain estimation is critical for autonomous offroad navigation. Reconstruction of a 3D surface allows rough and hilly ground to be represented, yielding faster driving and better planning and control. However, data from a 3D sensor samples the terrain unevenly, quickly becoming sparse at longer ranges and containing large voids because of occlusions and inclines. The proposed approach uses online kernel-based learning to estimate a continuous surface over the area of interest while providing upper and lower bounds on that surface. 
Unlike other approaches, visibility information is exploited to constrain the terrain surface and increase precision, and an efficient gradient-based optimization allows for realtime implementation.", "title": "" }, { "docid": "f31555cb1720843ec4921428dc79449e", "text": "Software architectures shift developers’ focus from lines-of-code to coarser-grained architectural elements and their interconnection structure. Architecture description languages (ADLs) have been proposed as modeling notations to support architecture-based development. There is, however, little consensus in the research community on what is an ADL, what aspects of an architecture should be modeled in an ADL, and which ADL is best suited for a particular problem. Furthermore, the distinction is rarely made between ADLs on one hand and formal specification, module interconnection, simulation, and programming languages on the other. This paper attempts to provide an answer to these questions. It motivates and presents a definition and a classification framework for ADLs. The utility of the definition is demonstrated by using it to differentiate ADLs from other modeling notations. The framework is used to classify and compare several existing ADLs.1", "title": "" }, { "docid": "4388a711aed50f59e15f0295c729ee17", "text": "This work is a combination of physical and analytical considerations of linear stability pictures on time-invariant and time-dependent spatial domains with symmetry. The discussion is offered in the context of the Rayleigh-Taylor instability (of a fluid interface accelerated in the direction of a heavier phase) applied to the drop splash problem, which provides a natural ground for developing stability theory on time-dependent spatial domains with O(2) symmetry. The peculiarity of the underlying linear model common in a number of other interfacial instabilities – linear oscillator ftt + a(k) f = 0 in the wavenumber k-space – allows one to establish a direct correspondence between stability pictures on time-invariant and time-dependent spatial domains. The stability analysis also leads to a notion of frustration in (linear) stability patterns.", "title": "" }, { "docid": "5ce4e0532bf1f6f122f62b37ba61384e", "text": "Media violence poses a threat to public health inasmuch as it leads to an increase in real-world violence and aggression. Research shows that fictional television and film violence contribute to both a short-term and a long-term increase in aggression and violence in young viewers. Television news violence also contributes to increased violence, principally in the form of imitative suicides and acts of aggression. Video games are clearly capable of producing an increase in aggression and violence in the short term, although no long-term longitudinal studies capable of demonstrating long-term effects have been conducted. The relationship between media violence and real-world violence and aggression is moderated by the nature of the media content and characteristics of and social influences on the individual exposed to that content. Still, the average overall size of the effect is large enough to place it in the category of known threats to public health.", "title": "" }, { "docid": "e9b89400c6bed90ac8c9465e047538e7", "text": "Myriad of graph-based algorithms in machine learning and data mining require parsing relational data iteratively. These algorithms are implemented in a large-scale distributed environment to scale to massive data sets. 
To accelerate these large-scale graph-based iterative computations, we propose delta-based accumulative iterative computation (DAIC). Different from traditional iterative computations, which iteratively update the result based on the result from the previous iteration, DAIC updates the result by accumulating the “changes” between iterations. By DAIC, we can process only the “changes” to avoid the negligible updates. Furthermore, we can perform DAIC asynchronously to bypass the high-cost synchronous barriers in heterogeneous distributed environments. Based on the DAIC model, we design and implement an asynchronous graph processing framework, Maiter. We evaluate Maiter on local cluster as well as on Amazon EC2 Cloud. The results show that Maiter achieves as much as 60 × speedup over Hadoop and outperforms other state-of-the-art frameworks.", "title": "" }, { "docid": "1571fbb923755323e32ac7d023bd1025", "text": "Natural language generation (NLG) is an important component in spoken dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder which is an extension of an Recurrent Neural Network based Encoder-Decoder architecture. The proposed Semantic Aggregator consists of two components: an Aligner and a Refiner. The Aligner is a conventional attention calculated over the encoded input information, while the Refiner is another attention or gating mechanism stacked over the attentive Aligner in order to further select and aggregate the semantic elements. The proposed model can be jointly trained both sentence planning and surface realization to produce natural language utterances. The model was extensively assessed on four different NLG domains, in which the experimental results showed that the proposed generator consistently outperforms the previous methods on all the NLG domains.", "title": "" }, { "docid": "d565220c9e4b9a4b9f8156434b8b4cd3", "text": "Decision Support Systems (DDS) have developed to exploit Information Technology (IT) to assist decision-makers in a wide variety of fields. The need to use spatial data in many of these diverse fields has led to increasing interest in the development of Spatial Decision Support Systems (SDSS) based around the Geographic Information System (GIS) technology. The paper examines the relationship between SDSS and GIS and suggests that SDSS is poised for further development owing to improvement in technology and the greater availability of spatial data.", "title": "" }, { "docid": "87bded10bc1a29a3c0dead2958defc2e", "text": "Context: Web applications are trusted by billions of users for performing day-to-day activities. Accessibility, availability and omnipresence of web applications have made them a prime target for attackers. A simple implementation flaw in the application could allow an attacker to steal sensitive information and perform adversary actions, and hence it is important to secure web applications from attacks. Defensive mechanisms for securing web applications from the flaws have received attention from both academia and industry. Objective: The objective of this literature review is to summarize the current state of the art for securing web applications from major flaws such as injection and logic flaws. Though different kinds of injection flaws exist, the scope is restricted to SQL Injection (SQLI) and Cross-site scripting (XSS), since they are rated as the top most threats by different security consortiums. 
Method: The relevant articles recently published are identified from well-known digital libraries, and a total of 86 primary studies are considered. A total of 17 articles related to SQLI, 35 related to XSS and 34 related to logic flaws are discussed. Results: The articles are categorized based on the phase of software development life cycle where the defense mechanism is put into place. Most of the articles focus on detecting the flaws and preventing attacks against web applications. Conclusion: Even though various approaches are available for securing web applications from SQLI and XSS, they are still prevalent due to their impact and severity. Logic flaws are gaining attention of the researchers since they violate the business specifications of applications. There is no single solution to mitigate all the flaws. More research is needed in the area of fixing flaws in the source code of applications.", "title": "" }, { "docid": "64fddaba616a01558f3534ee723883cb", "text": "We demonstrate 70.4 Tb/s transmission over 7,600 km with C+L band EDFAs using coded modulation with hybrid probabilistic and geometrical constellation shaping. We employ multi-stage nonlinearity compensation including DBP, fast LMS equalizer and generalized filter.", "title": "" }, { "docid": "11f404d45daeb02087383b9ea933457c", "text": "Distributed Denial of Service (DDoS) flooding attacks are one of the biggest concerns for security professionals. DDoS flooding attacks are typically explicit attempts to disrupt legitimate users' access to services. Attackers usually gain access to a large number of computers by exploiting their vulnerabilities to set up attack armies (i.e., Botnets). Once an attack army has been set up, an attacker can invoke a coordinated, large-scale attack against one or more targets. Developing a comprehensive defense mechanism against identified and anticipated DDoS flooding attacks is a desired goal of the intrusion detection and prevention research community. However, the development of such a mechanism requires a comprehensive understanding of the problem and the techniques that have been used thus far in preventing, detecting, and responding to various DDoS flooding attacks. In this paper, we explore the scope of the DDoS flooding attack problem and attempts to combat it. We categorize the DDoS flooding attacks and classify existing countermeasures based on where and when they prevent, detect, and respond to the DDoS flooding attacks. Moreover, we highlight the need for a comprehensive distributed and collaborative defense approach. Our primary intention for this work is to stimulate the research community into developing creative, effective, efficient, and comprehensive prevention, detection, and response mechanisms that address the DDoS flooding problem before, during and after an actual attack.", "title": "" }, { "docid": "f9692d0410cb97fd9c2ecf6f7b043b9f", "text": "This paper develops and analyzes four energy scenarios for California that are both exploratory and quantitative. The businessas-usual scenario represents a pathway guided by outcomes and expectations emerging from California’s energy crisis. Three alternative scenarios represent contexts where clean energy plays a greater role in California’s energy system: Split Public is driven by local and individual activities; Golden State gives importance to integrated state planning; Patriotic Energy represents a national drive to increase energy independence. 
Future energy consumption, composition of electricity generation, energy diversity, and greenhouse gas emissions are analyzed for each scenario through 2035. Energy savings, renewable energy, and transportation activities are identified as promising opportunities for achieving alternative energy pathways in California. A combined approach that brings together individual and community activities with state and national policies leads to the largest energy savings, increases in energy diversity, and reductions in greenhouse gas emissions. Critical challenges in California’s energy pathway over the next decades identified by the scenario analysis include dominance of the transportation sector, dependence on fossil fuels, emissions of greenhouse gases, accounting for electricity imports, and diversity of the electricity sector. The paper concludes with a set of policy lessons revealed from the California energy scenarios. © 2003 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
024ed4a8d9c49fcce2a6685c2cf76cce
Detail-preserved real-time hand motion regression from depth
[ { "docid": "7e68fe5b6a164359d2389f30686ec049", "text": "Tracking the articulated 3D motion of the hand has important applications, for example, in human-computer interaction and teleoperation. We present a novel method that can capture a broad range of articulated hand motions at interactive rates. Our hybrid approach combines, in a voting scheme, a discriminative, part-based pose retrieval method with a generative pose estimation method based on local optimization. Color information from a multi-view RGB camera setup along with a person-specific hand model are used by the generative method to find the pose that best explains the observed images. In parallel, our discriminative pose estimation method uses fingertips detected on depth data to estimate a complete or partial pose of the hand by adopting a part-based pose retrieval strategy. This part-based strategy helps reduce the search space drastically in comparison to a global pose retrieval strategy. Quantitative results show that our method achieves state-of-the-art accuracy on challenging sequences and a near-real time performance of 10 fps on a desktop computer.", "title": "" }, { "docid": "0d13be9f5e2082af96c370d3c316204f", "text": "We present a combined hardware and software solution for markerless reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time. Our system uses a single self-contained stereo camera unit built from off-the-shelf components and consumer graphics hardware to generate spatio-temporally coherent 3D models at 30 Hz. A new stereo matching algorithm estimates real-time RGB-D data. We start by scanning a smooth template model of the subject as they move rigidly. This geometric surface prior avoids strong scene assumptions, such as a kinematic human skeleton or a parametric shape model. Next, a novel GPU pipeline performs non-rigid registration of live RGB-D data to the smooth template using an extended non-linear as-rigid-as-possible (ARAP) framework. High-frequency details are fused onto the final mesh using a linear deformation model. The system is an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms. We show precise real-time reconstructions of diverse scenes, including: large deformations of users' heads, hands, and upper bodies; fine-scale wrinkles and folds of skin and clothing; and non-rigid interactions performed by users on flexible objects such as toys. We demonstrate how acquired models can be used for many interactive scenarios, including re-texturing, online performance capture and preview, and real-time shape and motion re-targeting.", "title": "" } ]
[ { "docid": "6bdcac1d424162a89adac7fa2a6221ae", "text": "The growing popularity of online product review forums invites people to express opinions and sentiments toward the products .It gives the knowledge about the product as well as sentiment of people towards the product. These online reviews are very important for forecasting the sales performance of product. In this paper, we discuss the online review mining techniques in movie domain. Sentiment PLSA which is responsible for finding hidden sentiment factors in the reviews and ARSA model used to predict sales performance. An Autoregressive Sentiment and Quality Aware model (ARSQA) also in consideration for to build the quality for predicting sales performance. We propose clustering and classification based algorithm for sentiment analysis.", "title": "" }, { "docid": "90c10466257f8b0c7d3289a319bf0fbe", "text": "This paper describes development of joint materials using only base metals (Cu and Sn) for power semiconductor assembly. The preform sheet of the joint material is made by two kinds of particles such as Cu source and Cu-Sn IMC source. Optimized ratio of Cu source: IMC source provides robust skeleton structure in joint area. The particles' mixture control (Cu density and thickness) affects stress control to eliminate cracks and delamination of the joint area. As evaluation, Thermal Cycling Test (TCT, −40°C∼+200°C, 1,000cycles) of Cu-Cu joint resulted no critical cracks / delamination / voids. We confirmed the material also applicable for attaching SiC die on the DCB substrate on bare Cu heatsink.", "title": "" }, { "docid": "e6f3280bcb98aa0ebcb2a8d6a4bbf528", "text": "OBJECTIVE\nThe purpose of this study was to examine age differences in response to different forms of psychotherapy for chronic pain.\n\n\nMETHODS\nWe performed a secondary analysis of 114 adults (ages 18-89 years) with a variety of chronic, nonmalignant pain conditions randomly assigned to 8 weeks of group-administered acceptance and commitment therapy (ACT) or cognitive behavioral therapy (CBT). Treatment response was defined as a drop of at least three points on the Brief Pain Inventory-interference subscale.\n\n\nRESULTS\nOlder adults were more likely to respond to ACT, and younger adults to CBT, both immediately following treatment and at 6-month follow-up. There were no significant differences in credibility, expectations of positive outcome, attrition, or satisfaction, although there was a trend for the youngest adults (ages 18-45 years) to complete fewer sessions.\n\n\nCONCLUSIONS\nThese data suggest that ACT may be an effective and acceptable treatment for chronic pain in older adults.", "title": "" }, { "docid": "d7711dac4c6c3f1aaed7f77228a2d99d", "text": "In today's teaching and learning approaches for first-semester students, practical courses more and more often complement traditional theoretical lectures. This practical element allows an early insight into the real world of engineering, augments student motivation, and enables students to acquire soft skills early. This paper describes a new freshman introduction course into practical engineering, which has been established within the Bachelor of Science curriculum of Electrical Engineering and Information Technology of RWTH Aachen University, Germany. The course is organized as an eight-day, full-time block laboratory for over 300 freshman students, who were supervised by more than 60 tutors from 23 institutes of the Electrical Engineering Department. 
Based on a threefold learning concept comprising mathematical methods, MATLAB programming, and practical engineering, the students were required to transfer mathematical basics to algorithms in MATLAB in order to control LEGO Mindstorms robots. Toward this end, a new toolbox, called the “RWTH-Mindstorms NXT Toolbox,” was developed, which enables the robots to be controlled remotely via MATLAB from a host computer. This paper describes how the laboratory course is organized and how it induces students to think as actual engineers would in solving real-world tasks with limited resources. Evaluation results show that the project improves the students' MATLAB programming skills, enhances motivation, and enables a peer learning process.", "title": "" }, { "docid": "3138b8b0e25cd0675c5611b15f4574d9", "text": "BG is a benchmark that rates a data store for processing interactive social networking actions using a pre-specified service level agreement, SLA. An example SLA may require 95% of issued requests to observe a response time faster than 100 milliseconds. BG computes two different ratings named SoAR and Socialites. In addition, it elevates the amount of unpredictable data produced by a data store to a first class metric, including it as a key component of the SLA and quantifying it as a part of the benchmarking process. One may use BG for a variety of purposes ranging from comparing different data stores with one another, evaluating alternative physical data organization techniques given a data store, quantifying the performance characteristics of a data store in the presence of failures (either CP or AP in CAP theorem), among others. This study illustrates BG’s first use case, comparing a document store with an industrial strength relational database management system (RDBMS) deployed either in standalone mode or augmented with memcached. No one system is superior for all BG actions. However, when considering a mix of actions, the memcached augmented RDBMS produces higher ratings.", "title": "" }, { "docid": "63a75f3eedb1410527eb0645ed9bf79d", "text": "Stiffness following surgery or injury to a joint develops as a progression of four stages: bleeding, edema, granulation tissue, and fibrosis. Continuous passive motion (CPM) properly applied during the first two stages of stiffness acts to pump blood and edema fluid away from the joint and periarticular tissues. This allows maintenance of normal periarticular soft tissue compliance. CPM is thus effective in preventing the development of stiffness if full motion is applied immediately following surgery and continued until swelling that limits the full motion of the joint no longer develops. This concept has been applied successfully to elbow rehabilitation, and explains the controversy surrounding CPM following knee arthroplasty. The application of this concept to clinical practice requires a paradigm shift, resulting in our attention being focused on preventing the initial or delayed accumulation of periarticular interstitial fluids.", "title": "" }, { "docid": "64bd2fc0d1b41574046340833144dabe", "text": "Probe-based confocal laser endomicroscopy (pCLE) provides high-resolution in vivo imaging for intraoperative tissue characterization. Maintaining a desired contact force between target tissue and the pCLE probe is important for image consistency, allowing large area surveillance to be performed. A hand-held instrument that can provide a predetermined contact force to obtain consistent images has been developed. The main components of the instrument include a linear voice coil actuator, a donut load-cell, and a pCLE probe. 
In this paper, detailed mechanical design of the instrument is presented and system level modeling of closed-loop force control of the actuator is provided. The performance of the instrument has been evaluated in bench tests as well as in hand-held experiments. Results demonstrate that the instrument ensures a consistent predetermined contact force between pCLE probe tip and tissue. Furthermore, it compensates for both simulated physiological movement of the tissue and involuntary movements of the operator's hand. Using pCLE video feature tracking of large colonic crypts within the mucosal surface, the steadiness of the tissue images obtained using the instrument force control is demonstrated by confirming minimal crypt translation.", "title": "" }, { "docid": "8a5ae40bc5921d7614ca34ddf53cebbc", "text": "In the natural language processing community, sentiment classification based on insufficient labeled data is a well-known challenging problem. In this paper, a novel semi-supervised learning algorithm called active deep network (ADN) is proposed to address this problem. First, we propose the semi-supervised learning framework of ADN. ADN is constructed from restricted Boltzmann machines (RBM) with unsupervised learning, fine-tuned by gradient-descent based supervised learning with an exponential loss function. Second, in the semi-supervised learning framework, we apply active learning to identify reviews that should be labeled as training data, then use the selected labeled reviews and all unlabeled reviews to train the ADN architecture. Moreover, we combine the information density with ADN, and propose the information ADN (IADN) method, which can apply the information density of all unlabeled reviews in choosing the manually labeled reviews. Experiments on five sentiment classification datasets show that ADN and IADN outperform classical semi-supervised learning algorithms, and deep learning techniques applied for sentiment classification. © 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4ad7cf99a6a67748a9cc98b99c12c1b9", "text": "During social interaction humans extract important information from tactile stimuli that can improve their understanding of the interaction. The development of a similar capability in a robot will contribute to the future success of intuitive human–robot interaction. This paper presents a thin, flexible and stretchable artificial skin for robotics based on the principle of electrical impedance tomography. This skin, which can be used to extract information such as location, duration and intensity of touch, was used to cover the forearm and upper arm of a full-size mannequin. A classifier based on the ‘LogitBoost’ algorithm was used to classify the modality of eight different types of touch applied by humans to the mannequin arm. Experiments showed that the modality of touch was correctly classified in approximately 71% of the trials. This was shown to be comparable to the accuracy of humans when identifying touch. The classification accuracies obtained represent significant improvements over previous classification algorithms applied to artificial sensitive skins. It is shown that features based on touch duration and intensity are sufficient to provide a good classification of touch modality. 
Gender and cultural background were examined and found to have no statistically significant effect on the classification results.", "title": "" }, { "docid": "d62a68d6fcd5c2ae4635709007e471da", "text": "We introduce a new method to combine the output probabilities of convolutional neural networks which we call Weighted Convolutional Neural Network Ensemble. Each network has an associated weight that makes networks with better performance have a greater influence at the time to classify in relation to networks that performed worse. This new approach produces better results than the common method that combines the networks doing just the average of the output probabilities to make the predictions. We show the validity of our proposal by improving the classification rate on a common image classification benchmark.", "title": "" }, { "docid": "8aacdb790ddec13f396a0591c0cd227a", "text": "This paper reports on a qualitative study of journal entries written by students in six health professions participating in the Interprofessional Health Mentors program at the University of British Columbia, Canada. The study examined (1) what health professions students learn about professional language and communication when given the opportunity, in an interprofessional group with a patient or client, to explore the uses, meanings, and effects of common health care terms, and (2) how health professional students write about their experience of discussing common health care terms, and what this reveals about how students see their development of professional discourse and participation in a professional discourse community. Using qualitative thematic analysis to address the first question, the study found that discussion of these health care terms provoked learning and reflection on how words commonly used in one health profession can be understood quite differently in other health professions, as well as on how health professionals' language choices may be perceived by patients and clients. Using discourse analysis to address the second question, the study further found that many of the students emphasized accuracy and certainty in language through clear definitions and intersubjective agreement. However, when prompted by the discussion they were willing to consider other functions and effects of language.", "title": "" }, { "docid": "509658ef2758b5dd01f50e99ffe5ee4b", "text": "The impact of reject brine chemical composition and disposal from inland desalination plants on soil and groundwater in the eastern region of Abu Dhabi Emirate, namely Al Wagan, Al Quaa and Um Al Zumool, was evaluated. Twenty five inland BWRO desalination plants (11 at Al Wagan, 12 at Al Quaa, and 2 at Um Al Zumool) have been investigated. The study indicated that average capacity of these plants varied between 26,400 G/d (99.93 m^3/d) to 61,000 G/d (230.91 m^3/d). The recovery rate varied from 60 to 70% and the reject brine accounted for about 30–40% of the total water production. The electrical conductivity of feed water and rejects brine varied from 4.61 to 14.70 and 12.90–30.30 (mS/cm), respectively. The reject brine is disposed directly into surface impoundment (unlined pits) in a permeable soil with low clay content, cation exchange capacity and organic matter content. The groundwater table lies at a depth of 100–150 m. The average distance between feed water intake and the disposal site is approximately 5 km. 
A survey has been conducted to gather basic information, determine the type of chemicals used, and determine if there is any current and previous monitoring program. The chemical compositions of the feed, product, reject, and pond water have been analyzed for major, minor and trace constituents. Most of the water samples (feed, product, reject and pond water) showed the presence of major, minor and trace constituents. Some of these constituents are above the Gulf Cooperation Council (GCC) and Abu-Dhabi National Oil Company (ADNOC) Standards for drinking water and effluents discharged into the desert. Total Petroleum Hydrocarbon (TPH) was also analyzed and found to be present, even in product water samples, in amount that exceed the GCC standards for organic chemical constituents in drinking water (0.01 mg/l).", "title": "" }, { "docid": "9f7aa5978855e173a45d443e46cbf5dd", "text": "Online gaming franchises such as World of Tanks, Defense of the Ancients, and StarCraft have attracted hundreds of millions of users who, apart from playing the game, also socialize with each other through gaming and viewing gamecasts. As a form of User Generated Content (UGC), gamecasts play an important role in user entertainment and gamer education. They deserve the attention of both industrial partners and the academic communities, corresponding to the large amount of revenue involved and the interesting research problems associated with UGC sites and social networks. Although previous work has put much effort into analyzing general UGC sites such as YouTube, relatively little is known about the gamecast sharing sites. In this work, we provide the first comprehensive study of gamecast sharing sites, including commercial streaming-based sites such as Amazon’s Twitch.tv and community-maintained replay-based sites such as WoTreplays. We collect and share a novel dataset on WoTreplays that includes more than 380,000 game replays, shared by more than 60,000 creators with more than 1.9 million gamers. Together with an earlier published dataset on Twitch.tv, we investigate basic characteristics of gamecast sharing sites, and we analyze the activities of their creators and spectators. Among our results, we find that (i) WoTreplays and Twitch.tv are both fast-consumed repositories, with millions of gamecasts being uploaded, viewed, and soon forgotten; (ii) both the gamecasts and the creators exhibit highly skewed popularity, with a significant heavy tail phenomenon; and (iii) the upload and download preferences of creators and spectators are different: while the creators emphasize their individual skills, the spectators appreciate team-wise tactics. Our findings provide important knowledge for infrastructure and service improvement, for example, in the design of proper resource allocation mechanisms that consider future gamecasting and in the tuning of incentive policies that further help player retention.", "title": "" }, { "docid": "e29c44032fd3c6bbf1859c055e4a2bae", "text": "BACKGROUND\nAutism and Williams syndrome (WS) are neuro-developmental disorders associated with distinct social phenotypes. While individuals with autism show a lack of interest in socially important cues, individuals with WS often show increased interest in socially relevant information.\n\n\nMETHODS\nThe current eye-tracking study explores how individuals with WS and autism preferentially attend to social scenes and movie extracts containing human actors and cartoon characters. 
The proportion of gaze time spent fixating on faces, bodies and the scene background was investigated.\n\n\nRESULTS\nWhile individuals with autism preferentially attended to characters' faces for less time than was typical, individuals with WS attended to the same regions for longer than typical. For individuals with autism atypical gaze behaviours extended across human actor and cartoon images or movies but for WS atypicalities were restricted to human actors.\n\n\nCONCLUSIONS\nThe reported gaze behaviours provide experimental evidence of the divergent social interests associated with autism and WS.", "title": "" }, { "docid": "ca4e2cff91621bca4018ce1eca5450e2", "text": "Decentralized optimization algorithms have received much attention due to the recent advances in network information processing. However, conventional decentralized algorithms based on projected gradient descent are incapable of handling high-dimensional constrained problems, as the projection step becomes computationally prohibitive. To address this problem, this paper adopts a projection-free optimization approach, a.k.a. the Frank–Wolfe (FW) or conditional gradient algorithm. We first develop a decentralized FW (DeFW) algorithm from the classical FW algorithm. The convergence of the proposed algorithm is studied by viewing the decentralized algorithm as an <italic>inexact </italic> FW algorithm. Using a diminishing step size rule and letting <inline-formula><tex-math notation=\"LaTeX\">$t$ </tex-math></inline-formula> be the iteration number, we show that the DeFW algorithm's convergence rate is <inline-formula><tex-math notation=\"LaTeX\">${\\mathcal O}(1/t)$</tex-math></inline-formula> for convex objectives; is <inline-formula><tex-math notation=\"LaTeX\">${\\mathcal O}(1/t^2)$</tex-math></inline-formula> for strongly convex objectives with the optimal solution in the interior of the constraint set; and is <inline-formula> <tex-math notation=\"LaTeX\">${\\mathcal O}(1/\\sqrt{t})$</tex-math></inline-formula> toward a stationary point for smooth but nonconvex objectives. We then show that a consensus-based DeFW algorithm meets the above guarantees with two communication rounds per iteration. We demonstrate the advantages of the proposed DeFW algorithm on low-complexity robust matrix completion and communication efficient sparse learning. Numerical results on synthetic and real data are presented to support our findings.", "title": "" }, { "docid": "f9b110890c90d48b6d2f84aa419c1598", "text": "Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without the need of knowledge about the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task. 
Our surprise-minimizing framework is suitable for learning in complex environments, even if the environment undergoes gradual or sudden changes, and it could eventually provide a framework to study the behavior of humans and animals as they encounter surprising events.", "title": "" }, { "docid": "d551eda5717671b53afc330ab2188e8d", "text": "Graphs are a powerful representation formalism that can be applied to a variety of aspects related to language processing. We provide an overview of how Natural Language Processing (NLP) problems have been projected into the graph framework, focusing in particular on graph construction – a crucial step in modeling the data to emphasize the phenomena targeted.", "title": "" }, { "docid": "28b1cc95aa385664cacbf20661f5cf56", "text": "Many organizations now emphasize the use of technology that can help them get closer to consumers and build ongoing relationships with them. The ability to compile consumer data profiles has been made even easier with Internet technology. However, it is often assumed that consumers like to believe they can trust a company with their personal details. Lack of trust may cause consumers to have privacy concerns. Addressing such privacy concerns may therefore be crucial to creating stable and ultimately profitable customer relationships. Three specific privacy concerns that have been frequently identified as being of importance to consumers include unauthorized secondary use of data, invasion of privacy, and errors. Results of a survey study indicate that both errors and invasion of privacy have a significant inverse relationship with online purchase behavior. Unauthorized use of secondary data appears to have little impact. Managerial implications include the careful selection of communication channels for maximum impact, the maintenance of discrete “permission-based” contact with consumers, and accurate recording and handling of data.", "title": "" }, { "docid": "0e2d6ebfade09beb448e9c538dadd015", "text": "Matching incomplete or partial fingerprints continues to be an important challenge today, despite the advances made in fingerprint identification techniques. While the introduction of compact silicon chip-based sensors that capture only part of the fingerprint has made this problem important from a commercial perspective, there is also considerable interest in processing partial and latent fingerprints obtained at crime scenes. When the partial print does not include structures such as core and delta, common matching methods based on alignment of singular structures fail. We present an approach that uses localized secondary features derived from relative minutiae information. A flow network-based matching technique is introduced to obtain one-to-one correspondence of secondary features. Our method balances the tradeoffs between maximizing the number of matches and minimizing total feature distance between query and reference fingerprints. A two-hidden-layer fully connected neural network is trained to generate the final similarity score based on minutiae matched in the overlapping areas. Since the minutia-based fingerprint representation is an ANSI-NIST standard [American National Standards Institute, New York, 1993], our approach has the advantage of being directly applicable to existing databases. We present results of testing on FVC2002’s DB1 and DB2 databases. 2005 Pattern Recognition Society. Published by Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "45260b1efb4858e231c8c15879db89d1", "text": "Distributed denial-of-service (DDoS) is a rapidly growing problem. The multitude and variety of both the attacks and the defense approaches is overwhelming. This paper presents two taxonomies for classifying attacks and defenses, and thus provides researchers with a better understanding of the problem and the current solution space. The attack classification criteria was selected to highlight commonalities and important features of attack strategies, that define challenges and dictate the design of countermeasures. The defense taxonomy classifies the body of existing DDoS defenses based on their design decisions; it then shows how these decisions dictate the advantages and deficiencies of proposed solutions.", "title": "" } ]
scidocsrr
4c587003ab58730e7dfa82602fcf0664
Graph-Based Named Entity Linking with Wikipedia
[ { "docid": "9d918a69a2be2b66da6ecf1e2d991258", "text": "We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.", "title": "" }, { "docid": "9118de2f5c7deebb9c3c6175c0b507b2", "text": "The integration of facts derived from information extraction systems into existing knowledge bases requires a system to disambiguate entity mentions in the text. This is challenging due to issues such as non-uniform variations in entity names, mention ambiguity, and entities absent from a knowledge base. We present a state of the art system for entity disambiguation that not only addresses these challenges but also scales to knowledge bases with several million entries using very little resources. Further, our approach achieves performance of up to 95% on entities mentioned from newswire and 80% on a public test set that was designed to include challenging queries.", "title": "" } ]
[ { "docid": "eb7c34c4959c39acb18fc5920ff73dba", "text": "Acoustic evidence suggests that contemporary Seoul Korean may be developing a tonal system, which is arising in the context of a nearly completed change in how speakers use voice onset time (VOT) to mark the language’s distinction among tense, lax and aspirated stops.Data from 36 native speakers of varying ages indicate that while VOT for tense stops has not changed since the 1960s, VOT differences between lax and aspirated stops have decreased, in some cases to the point of complete overlap. Concurrently, the mean F0 for words beginning with lax stops is significantly lower than the mean F0 for comparable words beginning with tense or aspirated stops. Hence the underlying contrast between lax and aspirated stops is maintained by younger speakers, but is phonetically manifested in terms of differentiated tonal melodies: laryngeally unmarked (lax) stops trigger the introduction of a default L tone, while laryngeally marked stops (aspirated and tense) introduce H, triggered by a feature specification for [stiff].", "title": "" }, { "docid": "f35f7aab4bf63527abbc3d7f4515b6d2", "text": "The elements of the Hessian matrix consist of the second derivatives of the error measure with respect to the weights and thresholds in the network. They are needed in Bayesian estimation of network regularization parameters, for estimation of error bars on the network outputs, for network pruning algorithms, and for fast retraining of the network following a small change in the training data. In this paper we present an extended backpropagation algorithm that allows all elements of the Hessian matrix to be evaluated exactly for a feedforward network of arbitrary topology. Software implementation of the algorithm is straightforward.", "title": "" }, { "docid": "62029c586f65cb6708255517b485526f", "text": "In this work, SDN has been utilized to alleviate and eliminate the problem of ARP poisoning attack. This attack is the underlying infrastructure for many other network attacks, such as, man in the middle, denial of service and session hijacking. In this paper we propose a new algorithm to resolve the problem of ARP spoofing. The algorithm can be applied in two different scenarios. The two scenarios are based on whether a network host will be assigned a dynamic or a static IP address. We call the first scenario SDN_DYN; the second scenario is called SDN_STA. For the evaluation process, a physical SDN-enabled switch has been utilized with Ryu controller. Our results show that the new algorithm can prevent ARP spoofing and other attacks exploiting it.", "title": "" }, { "docid": "e2280986abcec2d54ea68bd03bfea295", "text": "Image captioning is a challenging task that combines the field of computer vision and natural language processing. A variety of approaches have been proposed to achieve the goal of automatically describing an image, and recurrent neural network (RNN) or long-short term memory (LSTM) based models dominate this field. However, RNNs or LSTMs cannot be calculated in parallel and ignore the underlying hierarchical structure of a sentence. In this paper, we propose a framework that only employs convolutional neural networks (CNNs) to generate captions. Owing to parallel computing, our basic model is around 3× faster than NIC (an LSTM-based model) during training time, while also providing better results. We conduct extensive experiments on MSCOCO and investigate the influence of the model width and depth. 
Compared with LSTM-based models that apply similar attention mechanisms, our proposed models achieve comparable scores of BLEU-1,2,3,4 and METEOR, and higher scores of CIDEr. We also test our model on the paragraph annotation dataset [22], and get higher CIDEr score compared with hierarchical LSTMs.", "title": "" }, { "docid": "63b983921f19775f4e598b4b2111b084", "text": "This paper deals with the emergence of perceived age discrimination climate on the company level and its performance consequences. In this new approach to the field of diversity research, we investigated (a) the effect of organizational level age diversity on collective perceptions of age discrimination climate that (b) in turn should influence the collective affective commitment of employees, which is (c) an important trigger for overall company performance. In a large scale study that included 128 companies, a total of 8,651 employees provided data on their perceptions of age discrimination and affective commitment on the company level. Information on firm level performance was collected from key informants. We tested the proposed model using structural equation modeling (SEM) procedures and, overall, found support for all hypothesized relationships. The findings demonstrated that age diversity seems to be related to the emergence of an age discrimination climate in companies, which negatively impacts overall firm performance through the mediation of affective commitment. These results make valuable contributions to the diversity and discrimination literature by establishing perceived age discrimination on the company level as a decisive mediator in the age diversity/performance link. The results also suggest important practical implications for the effective management of an increasingly age diverse workforce. Copyright © 2010 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "2b310a05b6a0c0fae45a2e15f8d52101", "text": "Cyber threats and the field of computer cyber defense are gaining more and more an increased importance in our lives. Starting from our regular personal computers and ending with thin clients such as netbooks or smartphones we find ourselves bombarded with constant malware attacks. In this paper we will present a new and novel way in which we can detect these kind of attacks by using elements of modern game theory. We will present the effects and benefits of game theory and we will talk about a defense exercise model that can be used to train cyber response specialists.", "title": "" }, { "docid": "57a2ef4a644f0fc385185a381f309fcd", "text": "Despite recent emergence of adversarial based methods for video prediction, existing algorithms often produce unsatisfied results in image regions with rich structural information (i.e., object boundary) and detailed motion (i.e., articulated body movement). To this end, we present a structure preserving video prediction framework to explicitly address above issues and enhance video prediction quality. On one hand, our framework contains a two-stream generation architecture which deals with high frequency video content (i.e., detailed object or articulated motion structure) and low frequency video content (i.e., location or moving directions) in two separate streams. On the other hand, we propose a RNN structure for video prediction, which employs temporal-adaptive convolutional kernels to capture time-varying motion patterns as well as tiny objects within a scene. 
Extensive experiments on diverse scenes, ranging from human motion to semantic layout prediction, demonstrate the effectiveness of the proposed video prediction approach.", "title": "" }, { "docid": "4f50fb108ba0e42ef1e61d00f847f3bf", "text": "This paper describes the use of decision tree and rule induction in data-mining applications. Of methods for classification and regression that have been developed in the fields of pattern recognition, statistics, and machine learning, these are of particular interest for data mining since they utilize symbolic and interpretable representations. Symbolic solutions can provide a high degree of insight into the decision boundaries that exist in the data, and the logic underlying them. This aspect makes these predictive-mining techniques particularly attractive in commercial and industrial data-mining applications. We present here a synopsis of some major state-of-the-art tree and rule mining methodologies, as well as some recent advances.", "title": "" }, { "docid": "d1525fdab295a16d5610210e80fb8104", "text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.", "title": "" }, { "docid": "1e1706e1bd58a562a43cc7719f433f4f", "text": "In this paper, we present the use of D-higraphs to perform HAZOP studies. D-higraphs is a formalism that includes in a single model the functional as well as the structural (ontological) components of any given system. A tool to perform a semi-automatic guided HAZOP study on a process plant is presented. The diagnostic system uses an expert system to predict the behavior modeled using D-higraphs. This work is applied to the study of an industrial case and its results are compared with other similar approaches proposed in previous studies. The analysis shows that the proposed methodology fits its purpose enabling causal reasoning that explains causes and consequences derived from deviations, it also fills some of the gaps and drawbacks existing in previous reported HAZOP assistant tools.", "title": "" }, { "docid": "8e5c07dc210a75619414130913030985", "text": "Flexible and stretchable electronics and optoelectronics configured in soft, water resistant formats uniquely address seminal challenges in biomedicine. 
Over the past decade, there has been enormous progress in the materials, designs, and manufacturing processes for flexible/stretchable system subcomponents, including transistors, amplifiers, bio-sensors, actuators, light emitting diodes, photodetector arrays, photovoltaics, energy storage elements, and bare die integrated circuits. Nanomaterials prepared using top-down processing approaches and synthesis-based bottom-up methods have helped resolve the intrinsic mechanical mismatch between rigid/planar devices and soft/curvilinear biological structures, thereby enabling a broad range of non-invasive, minimally invasive, and implantable systems to address challenges in biomedicine. Integration of therapeutic functional nanomaterials with soft bioelectronics demonstrates therapeutics in combination with unconventional diagnostics capabilities. Recent advances in soft materials, devices, and integrated systems are reviewed, with representative examples that highlight the utility of soft bioelectronics for advanced medical diagnostics and therapies.", "title": "" }, { "docid": "0cf1f63fd39c8c74465fad866958dac6", "text": "Software development organizations that have been employing capability maturity models, such as SW-CMM or CMMI for improving their processes are now increasingly interested in the possibility of adopting agile development methods. In the context of project management, what can we say about Scrum’s alignment with CMMI? The aim of our paper is to present the mapping between CMMI and the agile method Scrum, showing major gaps between them and identifying how organizations are adopting complementary practices in their projects to make these two approaches more compliant. This is useful for organizations that have a plan-driven process based on the CMMI model and are planning to improve the agility of processes or to help organizations to define a new project management framework based on both CMMI and Scrum practices.", "title": "" }, { "docid": "4eb27527c174bf7a31887a88f48ee423", "text": "Because of the increasing portability and wearability of noninvasive electrophysiological systems that record and process electrical signals from the human brain, automated systems for assessing changes in user cognitive state, intent, and response to events are of increasing interest. Brain-computer interface (BCI) systems can make use of such knowledge to deliver relevant feedback to the user or to an observer, or within a human-machine system to increase safety and enhance overall performance. Building robust and useful BCI models from accumulated biological knowledge and available data is a major challenge, as are technical problems associated with incorporating multimodal physiological, behavioral, and contextual data that may in the future be increasingly ubiquitous. While performance of current BCI modeling methods is slowly increasing, current performance levels do not yet support widespread uses. Here we discuss the current neuroscientific questions and data processing challenges facing BCI designers and outline some promising current and future directions to address them.", "title": "" }, { "docid": "7d4d0e4d99b5dfe675f5f4eff5e5679f", "text": "Remote work and intensive use of Information Technologies (IT) are increasingly common in organizations. At the same time, professional stress seems to develop. However, IS research has paid little attention to the relationships between these two phenomena. 
The purpose of this research in progress is to present a framework that introduces the influence of (1) new spatial and temporal constraints and of (2) intensive use of IT on employee emotions at work. Specifically, this paper relies on virtuality (e.g. Chudoba et al. 2005) and media richness (Daft and Lengel 1984) theories to determine the emotional consequences of geographically distributed work.", "title": "" }, { "docid": "2000c393acd11a31331d234fb56b8abd", "text": "This letter reports the fabrication of a GaN heterostructure field-effect transistor with oxide spacer placed on the mesa sidewalls. The presence of an oxide spacer effectively eliminates the gate leakage current that occurs at the channel edge, where the gate metal is in contact with the 2-D electron gas edge on the mesa sidewall. From the two-terminal gate leakage current measurements, the leakage current was found to be several nA at VG=-12 V and at VG=-450 V. The benefits of the proposed spacer scheme include the patterning of the metal electrodes by plasma etching and a lower manufacturing cost.", "title": "" }, { "docid": "efd1e2aa69306bde416065547585813b", "text": "Numerous approaches based on metrics, token sequence pattern-matching, abstract syntax tree (AST) or program dependency graph (PDG) analysis have already been proposed to highlight similarities in source code: in this paper we present a simple and scalable architecture based on AST fingerprinting. Thanks to a study of several hashing strategies reducing false-positive collisions, we propose a framework that efficiently indexes AST representations in a database, that quickly detects exact (w.r.t source code abstraction) clone clusters and that easily retrieves their corresponding ASTs. Our aim is to allow further processing of neighboring exact matches in order to identify the larger approximate matches, dealing with the common modification patterns seen in the intra-project copy-pastes and in the plagiarism cases.", "title": "" }, { "docid": "35a063ab339f32326547cc54bee334be", "text": "We present a model for attacking various cryptographic schemes by taking advantage of random hardware faults. The model consists of a black-box containing some cryptographic secret. The box interacts with the outside world by following a cryptographic protocol. The model supposes that from time to time the box is affected by a random hardware fault causing it to output incorrect values. For example, the hardware fault flips an internal register bit at some point during the computation. We show that for many digital signature and identification schemes these incorrect outputs completely expose the secrets stored in the box. We present the following results: (1) The secret signing key used in an implementation of RSA based on the Chinese Remainder Theorem (CRT) is completely exposed from a single erroneous RSA signature, (2) for non-CRT implementations of RSA the secret key is exposed given a large number (e.g. 1000) of erroneous signatures, (3) the secret key used in Fiat-Shamir identification is exposed after a small number (e.g. 10) of faulty executions of the protocol, and (4) the secret key used in Schnorr's identification protocol is exposed after a much larger number (e.g. 10,000) of faulty executions. Our estimates for the number of necessary faults are based on standard security parameters such as a 1024-bit modulus, and a 2^-40 identification error probability. Our results demonstrate the importance of preventing errors in cryptographic computations. 
We conclude the paper with various methods for preventing these attacks.", "title": "" }, { "docid": "be3204a5a4430cc3150bf0368a972e38", "text": "Deep learning has exploded in the public consciousness, primarily as predictive and analytical products suffuse our world, in the form of numerous human-centered smart-world systems, including targeted advertisements, natural language assistants and interpreters, and prototype self-driving vehicle systems. Yet to most, the underlying mechanisms that enable such human-centered smart products remain obscure. In contrast, researchers across disciplines have been incorporating deep learning into their research to solve problems that could not have been approached before. In this paper, we seek to provide a thorough investigation of deep learning in its applications and mechanisms. Specifically, as a categorical collection of state of the art in deep learning research, we hope to provide a broad reference for those seeking a primer on deep learning and its various implementations, platforms, algorithms, and uses in a variety of smart-world systems. Furthermore, we hope to outline recent key advancements in the technology, and provide insight into areas, in which deep learning can improve investigation, as well as highlight new areas of research that have yet to see the application of deep learning, but could nonetheless benefit immensely. We hope this survey provides a valuable reference for new deep learning practitioners, as well as those seeking to innovate in the application of deep learning.", "title": "" }, { "docid": "a00f39476d72dfd7e244c3588ced3ca5", "text": "This paper holds a survey on leaf disease detection using various image processing technique. Digital image processing is fast, reliable and accurate technique for detection of diseases also various algorithms can be used for identification and classification of leaf diseases in plant. This paper presents techniques used by different author to identify disease such as clustering method, color base image analysis method, classifier and artificial neural network for classification of diseases. The main focus of our work is on the analysis of different leaf disease detection techniques and also provides an overview of different image processing techniques.", "title": "" }, { "docid": "b1b6e670f21479956d2bbe281c6ff556", "text": "Near real-time data from the MODIS satellite sensor was used to detect and trace a harmful algal bloom (HAB), or red tide, in SW Florida coastal waters from October to December 2004. MODIS fluorescence line height (FLH in W m^-2 μm^-1 sr^-1) data showed the highest correlation with near-concurrent in situ chlorophyll-a concentration (Chl in mg m^-3). For Chl ranging between 0.4 to 4 mg m^-3 the ratio between MODIS FLH and in situ Chl is about 0.1 W m^-2 μm^-1 sr^-1 per mg m^-3 chlorophyll (Chl=1.255 (FLH 10), r =0.92, n =77). In contrast, the band-ratio chlorophyll product of either MODIS or SeaWiFS in this complex coastal environment provided false information. Errors in the satellite Chl data can be both negative and positive (3–15 times higher than in situ Chl) and these data are often inconsistent either spatially or temporally, due to interferences of other water constituents. 
The red tide that formed from November to December 2004 off SW Florida was revealed by MODIS FLH imagery, and was confirmed by field sampling to contain medium (10 to 10 cells L ) to high (>10 cells L ) concentrations of the toxic dinoflagellate Karenia brevis. The FLH imagery also showed that the bloom started in mid-October south of Charlotte Harbor, and that it developed and moved to the south and southwest in the subsequent weeks. Despite some artifacts in the data and uncertainty caused by factors such as unknown fluorescence efficiency, our results show that the MODIS FLH data provide an unprecedented tool for research and managers to study and monitor algal blooms in coastal environments. © 2005 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
d055c7f0733ed705bfb578f4c0ee53df
Competitive Strategies for Brick-and-Mortar Stores to Counter "Showrooming"
[ { "docid": "231a4c5c5ef010300422b3cab8105290", "text": "There have been many claims that the Internet represents a new nearly "frictionless market." Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products—books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9–16% lower than prices in conventional outlets, depending on whether taxes, shipping, and shopping costs are included in the price. Additionally, we find that Internet retailers' price adjustments over time are up to 100 times smaller than conventional retailers' price adjustments—presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers. (Search; Competition; Internet; Price Dispersion; Menu Costs; Pricing; Intermediaries)", "title": "" }, { "docid": "50beb6d7c0581bf842b47008d2d981f2", "text": "Our paper shows that the parameters in existing theoretical models of channel substitution such as offline transportation cost, online disutility cost, and the prices of online and offline retailers interact to determine consumer choice of channels. In this way, our results provide empirical support for many such models. In particular, we empirically examine the trade-off between the benefits of buying online and the benefits of buying in a local retail store. How does a consumer's physical location shape the relative benefits of buying from the online world? We explore this problem using data from Amazon.com on the top-selling books for 1,497 unique locations in the United States for 10 months ending in January 2006. We show that when a store opens locally, people substitute away from online purchasing, even controlling for product-specific preferences by location. These estimates are economically large, suggesting that the disutility costs of purchasing online are substantial and that offline transportation costs matter. We also show that offline entry decreases consumers' sensitivity to online price discounts. However, we find no consistent evidence that the breadth of the product line at a local retail store affects purchases.", "title": "" } ]
[ { "docid": "e494bd8d686605cdf10067781a8f36c9", "text": "The purpose of this paper is to examine the role of two basic types of learning in contemporary organizations – incremental (knowledge exploitation) and radical learning (knowledge exploration) – in making organization’s strategic decisions. In achieving this goal a conceptual model of influence of learning types on the nature of strategic decision making and their outcomes was formed, on the basis of which the empirical research was conducted, encompassing 54 top managers in large Croatian companies. The paper discusses the nature of organizational learning and decision making at strategic management level. The results obtained are suggesting that there is a relationship between managers' learning type and decision making approaches at strategic management level, as well as there is the interdependence between these two processes with strategic decision making outcomes. Within these results there are interesting insights, such as that the effect of radical learning on analytical decision making approach is significantly weaker and narrower when compared to the effect of incremental learning on the same approach, and that analytical decision making approach does not affect strategic decision making outcomes.", "title": "" }, { "docid": "4d522957bbcb6e6dcebe2adfbe5262a7", "text": "Nowadays, people are usually involved in multiple heterogeneous social networks simultaneously. Discovering the anchor links between the accounts owned by the same users across different social networks is crucial for many important inter-network applications, e.g., cross-network link transfer and cross-network recommendation. Many different supervised models have been proposed to predict anchor links so far, but they are effective only when the labeled anchor links are abundant. However, in real scenarios, such a requirement can hardly be met and most anchor links are unlabeled, since manually labeling the inter-network anchor links is quite costly and tedious. To overcome such a problem and utilize the numerous unlabeled anchor links in model building, in this paper, we introduce the active learning based anchor link prediction problem. Different from the traditional active learning problems, due to the one-to-one constraint on anchor links, if an unlabeled anchor link a = ( u , v ) is identified as positive (i.e., existing), all the other unlabeled anchor links incident to account u or account v will be negative (i.e., non-existing) automatically. Viewed in such a perspective, asking for the labels of potential positive anchor links in the unlabeled set will be rewarding in the active anchor link prediction problem. Various novel anchor link information gain measures are defined in this paper, based on which several constraint active anchor link prediction methods are introduced. Extensive experiments have been done on real-world social network datasets to compare the performance of these methods with state-of-art anchor link prediction methods. The experimental results show that the proposed Mean-entropy-based Constrained Active Learning (MC) method can outperform other methods with significant advantages.", "title": "" }, { "docid": "071136d78ce8e3001e4b1bb47dc43d48", "text": "Graphene-enabled wireless communications constitute a novel paradigm which has been proposed to implement wireless communications among nanosystems. 
Indeed, graphene-based plasmonic nano-antennas, or graphennas, just a few micrometers in size have been predicted to radiate electromagnetic waves at the terahertz band. In this work, the important role of the graphene conductivity in the characteristics of graphennas is analyzed, and their radiation performance both in transmission and reception is numerically studied. The resonance frequency of graphennas is calculated as a function of their length and width, both analytically and by simulation. Moreover, the influence of a dielectric substrate with a variable size, and the position of the patch with respect to the substrate is also evaluated. Further, important properties of graphene, such as its chemical potential or its relaxation time, are found to have a profound impact in the radiation properties of graphennas. Finally, the radiation pattern of a graphenna is compared to that of an equivalent metallic antenna. These results will prove useful for designers of future graphennas, which are expected to enable wireless communications", "title": "" }, { "docid": "e4e187d6f6d920d3a8e18f8b529bfb23", "text": "Deep hierarchical reinforcement learning has gained a lot of attention in recent years due to its ability to produce state-of-the-art results in challenging environments where non-hierarchical frameworks fail to learn useful policies. However, as problem domains become more complex, deep hierarchical reinforcement learning can become inefficient, leading to longer convergence times and poor performance. We introduce the Deep Nested Agent framework, which is a variant of deep hierarchical reinforcement learning where information from the main agent is propagated to the low level nested agent by incorporating this information into the nested agent’s state. We demonstrate the effectiveness and performance of the Deep Nested Agent framework by applying it to three scenarios in Minecraft with comparisons to a deep non-hierarchical single agent framework, as well as, a deep hierarchical framework.", "title": "" }, { "docid": "a097f893446a9cc019878909975f5409", "text": "Monocular vision is frequently used in Micro Air Vehicles for many tasks such autonomous navigation, tracking, search and autonomous landing. To address this problem and in the context of autonomous landing of a MAV on a platform, we use a template-based matching in an image pyramid scheme in combination with an edge detector. Thus, the landing zone is localised via image processing in a frame-to-frame basis. Images are captured by the MAV's onboard camera of the MAV and processed with a multi-scale image processing strategy to detect the landing zone at different scales. We assessed our approach in real-time experiments using a Parrot Bebop 2.0 in outdoors and at different heights.", "title": "" }, { "docid": "751843f6085ba854dc75d9a6828bed13", "text": "With the developments in information technology and improvements in communication channels, fraud is spreading all over the world, resulting in huge financial losses. Though fraud prevention mechanisms such as CHIP&PIN are developed, these mechanisms do not prevent the most common fraud types such as fraudulent credit card usages over virtual POS terminals through Internet or mail orders. As a result, fraud detection is the essential tool and probably the best way to stop such fraud types. In this study, classification models based on Artificial Neural Networks (ANN) and Logistic Regression (LR) are developed and applied on credit card fraud detection problem. 
This study is one of the first to compare the performance of ANN and LR methods in credit card fraud detection with a real data set.", "title": "" }, { "docid": "54ba46965571a60e073dfab95ede656e", "text": "This paper presents a fair decentralized mutual exclusion algorithm for distributed systems in which processes communicate by asynchronous message passing. The algorithm requires between N - 1 and 2(N - 1) messages per critical section access, where N is the number of processes in the system. The exact message complexity can be expressed as a deterministic function of concurrency in the computation. The algorithm does not introduce any other overheads over Lamport's and Ricart-Agrawala's algorithms, which require 3(N - 1) and 2(N - 1) messages, respectively, per critical section access and are the only other decentralized algorithms that allow mutual exclusion access in the order of the timestamps of requests. Index Terms: Algorithm, concurrency, distributed system, fairness, mutual exclusion, synchronization.", "title": "" }, { "docid": "b87be040dae4d38538159876e01f310b", "text": "We present data from detailed observations of CityWall, a large multi-touch display installed in a central location in Helsinki, Finland. During eight days of installation, 1199 persons interacted with the system in various social configurations. Videos of these encounters were examined qualitatively as well as quantitatively based on human coding of events. The data convey phenomena that arise uniquely in public use: crowding, massively parallel interaction, teamwork, games, negotiations of transitions and handovers, conflict management, gestures and overt remarks to co-present people, and \"marking\" the display for others. We analyze how public availability is achieved through social learning and negotiation, why interaction becomes performative and, finally, how the display restructures the public space. The multi-touch feature, gesture-based interaction, and the physical display size contributed differentially to these uses. Our findings on the social organization of the use of public displays can be useful for designing such systems for urban environments.", "title": "" }, { "docid": "8ba7438bb5def91fb7d0c1d59e4bb7c4", "text": "In the almost twenty years since Vasarhelyi and Halper (1991) reported on their pioneering implementation of what has come to be known as Continuous Auditing (CA), the concept has increasingly moved from theory into practice. A 2006 survey by PricewaterhouseCoopers shows that half of all responding firms use some sort of CA techniques, and the majority of the rest plan to do so in the near future. CA not only has an increasing impact on auditing practice, but is also one of the rare instances in which such a significant change was led by the researchers. In this paper we survey the state of CA after two decades of research into continuous auditing theory and practice, and draw out the lessons learned by us in recent pilot CA projects at two major firms, to examine where this unique partnership between academics and auditors will take CA in the future.", "title": "" }, { "docid": "c50e71e3ae5abfffa277f50383c8469e", "text": "Results from recent studies of retrograde amnesia following damage to the hippocampal complex of human and non-human subjects have shown that retrograde amnesia is extensive and can encompass much of a subject's lifetime; the degree of loss may depend upon the type of memory assessed.
These and other findings suggest that the hippocampal formation and related structures are involved in certain forms of memory (e.g. autobiographical episodic and spatial memory) for as long as they exist and contribute to the transformation and stabilization of other forms of memory stored elsewhere in the brain.", "title": "" }, { "docid": "b54a2d0350ceac52ed92565af267b6e2", "text": "In this paper, we address the problem of classifying image sets for face recognition, where each set contains images belonging to the same subject and typically covering large variations. By modeling each image set as a manifold, we formulate the problem as the computation of the distance between two manifolds, called manifold-manifold distance (MMD). Since an image set can come in three pattern levels, point, subspace, and manifold, we systematically study the distance among the three levels and formulate them in a general multilevel MMD framework. Specifically, we express a manifold by a collection of local linear models, each depicted by a subspace. MMD is then converted to integrate the distances between pairs of subspaces from one of the involved manifolds. We theoretically and experimentally study several configurations of the ingredients of MMD. The proposed method is applied to the task of face recognition with image sets, where identification is achieved by seeking the minimum MMD from the probe to the gallery of image sets. Our experiments demonstrate that, as a general set similarity measure, MMD consistently outperforms other competing nondiscriminative methods and is also promisingly comparable to the state-of-the-art discriminative methods.", "title": "" }, { "docid": "ca9a7a1f7be7d494f6c0e3e4bb408a95", "text": "An enduring and richly elaborated dichotomy in cognitive neuroscience is that of reflective versus reflexive decision making and choice. Other literatures refer to the two ends of what is likely to be a spectrum with terms such as goal-directed versus habitual, model-based versus model-free or prospective versus retrospective. One of the most rigorous traditions of experimental work in the field started with studies in rodents and graduated via human versions and enrichments of those experiments to a current state in which new paradigms are probing and challenging the very heart of the distinction. We review four generations of work in this tradition and provide pointers to the forefront of the field's fifth generation.", "title": "" }, { "docid": "852b4c7b434937299a82c4b8aa3f264e", "text": "Baer's review (2003; this issue) suggests that mindfulness-based interventions are clinically efficacious, but that better designed studies are now needed to substantiate the field and place it on a firm foundation for future growth. Her review, coupled with other lines of evidence, suggests that interest in incorporating mindfulness into clinical interventions in medicine and psychology is growing. It is thus important that professionals coming to this field understand some of the unique factors associated with the delivery of mindfulness-based interventions and the potential conceptual and practical pitfalls of not recognizing the features of this broadly unfamiliar landscape. 
This commentary highlights and contextualizes (1) what exactly mindfulness is, (2) where it came from, (3) how it came to be introduced into medicine and health care, (4) issues of cross-cultural sensitivity and understanding in the study of meditative practices stemming from other cultures and in applications of them in novel settings, (5) why it is important for people who are teaching mindfulness to practice themselves, (6) results from 3 recent studies from the Center for Mindfulness in Medicine, Health Care, and Society not reviewed by Baer but which raise a number of key questions about clinical applicability, study design, and mechanism of action, and (7) current opportunities for professional training and development in mindfulness and its clinical applications. I appreciate the opportunity to comment on Baer's (2003; this issue) review of mindfulness training as clinical intervention and to add my own reflections on the emergence of mindfulness in a clinical context, especially in a journal explicitly devoted to both science and practice. The universe of mindfulness brings with it a whole new meaning and thrust to the word practice, one which I believe has the potential to contribute profoundly to the further development of the field of clinical psychology and its allied disciplines, behavioral medicine, psychosomatic medicine, and health psychology, through both a broadening of research approaches to mind/body interactions and the development of new classes of clinical interventions. I find the Baer review to be evenhanded, cogent, and perceptive in its description and evaluation of the work that has been published through the middle of 2001, work that features mindfulness training as the primary element in various clinical interventions. It complements nicely the recent review by Bishop (2002), which to my mind ignores some of the most important, if difficult to define, features of such interventions in its emphasis on the perceived need", "title": "" }, { "docid": "2223620ed94e31dc4969705c290aa6fc", "text": "Text detection in images or videos is an important step to achieve multimedia content retrieval. In this paper, an efficient algorithm which can automatically detect, localize and extract horizontally aligned text in images (and digital videos) with complex backgrounds is presented. The proposed approach is based on the application of a color reduction technique, a method for edge detection, and the localization of text regions using projection profile analyses and geometrical properties. The output of the algorithm are text boxes with a simplified background, ready to be fed into an OCR engine for subsequent character recognition. Our proposal is robust with respect to different font sizes, font colors, languages and background complexities. The performance of the approach is demonstrated by presenting promising experimental results for a set of images taken from different types of video sequences.", "title": "" }, { "docid": "848aae58854681e75fae293e2f8d2fc5", "text": "Over last several decades, computer vision researchers have been devoted to find good feature to solve different tasks, such as object recognition, object detection, object segmentation, activity recognition and so forth. Ideal features transform raw pixel intensity values to a representation in which these computer vision problems are easier to solve. Recently, deep features from convolutional neural network (CNN) have attracted many researchers in computer vision.
In the supervised setting, these hierarchies are trained to solve specific problems by minimizing an objective function. More recently, the feature learned from large scale image dataset have been proved to be very effective and generic for many computer vision task. The feature learned from recognition task can be used in the object detection task. This work uncover the principles that lead to these generic feature representations in the transfer learning, which does not need to train the dataset again but transfer the rich feature from CNN learned from ImageNet dataset. We begin by summarize some related prior works, particularly the paper in object recognition, object detection and segmentation. We introduce the deep feature to computer vision task in intelligent transportation system. We apply deep feature in object detection task, especially in vehicle detection task. To make fully use of objectness proposals, we apply proposal generator on road marking detection and recognition task. Third, to fully understand the transportation situation, we introduce the deep feature into scene understanding. We experiment each task for different public datasets, and prove our framework is robust.", "title": "" }, { "docid": "db897ae99b6e8d2fc72e7d230f36b661", "text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.", "title": "" }, { "docid": "cf460c614c64b9fb69d5d56e40f2b6ba", "text": "Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique gene and proteins and broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated on two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/). 
Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from http://evexdb.org/download/, under the Creative Commons - Attribution - Share Alike (CC BY-SA) license.", "title": "" }, { "docid": "a830d1d83361c3432cd02c4bd0d57956", "text": "Recent fMRI evidence has detected increased medial prefrontal activation during contemplation of personal moral dilemmas compared to impersonal ones, which suggests that this cortical region plays a role in personal moral judgment. However, functional imaging results cannot definitively establish that a brain area is necessary for a particular cognitive process. This requires evidence from lesion techniques, such as studies of human patients with focal brain damage. Here, we tested 7 patients with lesions in the ventromedial prefrontal cortex and 12 healthy individuals in personal moral dilemmas, impersonal moral dilemmas and non-moral dilemmas. Compared to normal controls, patients were more willing to judge personal moral violations as acceptable behaviors in personal moral dilemmas, and they did so more quickly. In contrast, their performance in impersonal and non-moral dilemmas was comparable to that of controls. These results indicate that the ventromedial prefrontal cortex is necessary to oppose personal moral violations, possibly by mediating anticipatory, self-focused, emotional reactions that may exert strong influence on moral choice and behavior.", "title": "" }, { "docid": "27596e0ce483228e86279a9394d389c7", "text": "In the first decade of neurocognitive word production research the predominant approach was brain mapping, i.e., investigating the regional cerebral brain activation patterns correlated with word production tasks, such as picture naming and word generation. Indefrey and Levelt (2004) conducted a comprehensive meta-analysis of word production studies that used this approach and combined the resulting spatial information on neural correlates of component processes of word production with information on the time course of word production provided by behavioral and electromagnetic studies. In recent years, neurocognitive word production research has seen a major change toward a hypothesis-testing approach. This approach is characterized by the design of experimental variables modulating single component processes of word production and testing for predicted effects on spatial or temporal neurocognitive signatures of these components. This change was accompanied by the development of a broader spectrum of measurement and analysis techniques. The article reviews the findings of recent studies using the new approach. The time course assumptions of Indefrey and Levelt (2004) have largely been confirmed requiring only minor adaptations. Adaptations of the brain structure/function relationships proposed by Indefrey and Levelt (2004) include the precise role of subregions of the left inferior frontal gyrus as well as a probable, yet to date unclear role of the inferior parietal cortex in word production.", "title": "" } ]
scidocsrr
435e94cf0155c31b18160bbb54ca9437
Coaching mothers of children with autism: a qualitative study for occupational therapy practice.
[ { "docid": "88c830076b8743f25a7849d5f5e71295", "text": "Occupational therapy practitioners are among the professionals who provide services to children and adults with autism spectrum disorder (ASD), embracing both leadership and supportive roles in service delivery. The study's primary aims were as follows: (1) to identify, evaluate, and synthesize the research literature on interventions for ASD of relevance to occupational therapy and (2) to interpret and apply the research literature to occupational therapy. A total of 49 articles met the authors' criteria and were included in the review. Six categories of research topics were identified, the first 3 of which are most closely related to occupational therapy: (1) sensory integration and sensory-based interventions; (2) relationship-based, interactive interventions; (3) developmental skill-based programs; (4) social cognitive skill training; (5) parent-directed or parent-mediated approaches; and (6) intensive behavioral intervention. Under each category, themes supported by research evidence and applicable to occupational therapy were defined. The findings have implications for intervention methods, communication regarding efficacious practices to professionals and consumers, and future occupational therapy research.", "title": "" } ]
[ { "docid": "93e2a4357573c446b2747f7b21d9d443", "text": "Social Network Systems pioneer a paradigm of access control that is distinct from traditional approaches to access control. Gates coined the term Relationship-Based Access Control (ReBAC) to refer to this paradigm. ReBAC is characterized by the explicit tracking of interpersonal relationships between users, and the expression of access control policies in terms of these relationships. This work explores what it takes to widen the applicability of ReBAC to application domains other than social computing. To this end, we formulate an archetypical ReBAC model to capture the essence of the paradigm, that is, authorization decisions are based on the relationship between the resource owner and the resource accessor in a social network maintained by the protection system. A novelty of the model is that it captures the contextual nature of relationships. We devise a policy language, based on modal logic, for composing access control policies that support delegation of trust. We use a case study in the domain of Electronic Health Records to demonstrate the utility of our model and its policy language. This work provides initial evidence to the feasibility and utility of ReBAC as a general-purpose paradigm of access control.", "title": "" }, { "docid": "51bd82a4393105ed63a188b2dd54956b", "text": "Although perceived continuity with one's future self has attracted increasing research interest, age differences in this phenomenon remain poorly understood. The present study is the first to simultaneously examine past and future self-continuity across multiple temporal distances using both explicit and implicit measures and controlling for a range of theoretically implicated covariates in an adult life span sample (N = 91, aged 18-92, M = 50.15, SD = 19.20, 56% female). Perceived similarity to one's self across 6 past and 6 future time points (1 month to 10 years) was assessed with an explicit self-report measure and an implicit me/not me trait rating task. In multilevel analyses, age was significantly associated with greater implicit and explicit self-continuity, especially for more distant intervals. Further, reaction times (RTs) in the implicit task remained stable with temporal distance for older adults but decreased with temporal distance for younger adults, especially for future ratings. This points toward age differences in the underlying mechanisms of self-continuity. Multilevel models examined the role of various covariates including personality, cognition, future horizons, and subjective health and found that none of them could fully account for the observed age effects. Taken together, our findings suggest that chronological age is associated with greater self-continuity although specific mechanisms and correlates may vary by age. (PsycINFO Database Record", "title": "" }, { "docid": "24e2efc78dc8ffd57f25744ac7532807", "text": "In this paper, we address the problem of outdoor, appearance-based topological localization, particularly over long periods of time where seasonal changes alter the appearance of the environment. We investigate a straight-forward method that relies on local image features to compare single image pairs. We first look into which of the dominating image feature algorithms, SIFT or the more recent SURF, that is most suitable for this task. We then fine-tune our localization algorithm in terms of accuracy, and also introduce the epipolar constraint to further improve the result. 
The final localization algorithm is applied on multiple data sets, each consisting of a large number of panoramic images, which have been acquired over a period of nine months with large seasonal changes. The final localization rate in the single-image matching, cross-seasonal case is between 80 to 95%.", "title": "" }, { "docid": "1af3be5ed92448095c8a82738e003855", "text": "OBJECTIVE\nThe aim of this review is to identify, critically evaluate, and summarize the laughter literature across a number of fields related to medicine and health care to assess to what extent laughter health-related benefits are currently supported by empirical evidence.\n\n\nDATA SOURCES AND STUDY SELECTION\nA comprehensive laughter literature search was performed. A thorough search of the gray literature was also undertaken. A list of inclusion and exclusion criteria was identified.\n\n\nDATA EXTRACTION\nIt was necessary to distinguish between humor and laughter to assess health-related outcomes elicited by laughter only.\n\n\nDATA SYNTHESIS\nThematic analysis was applied to summarize laughter health-related outcomes, relationships, and general robustness.\n\n\nCONCLUSIONS\nLaughter has shown physiological, psychological, social, spiritual, and quality-of-life benefits. Adverse effects are very limited, and laughter is practically lacking in contraindications. Therapeutic efficacy of laughter is mainly derived from spontaneous laughter (triggered by external stimuli or positive emotions) and self-induced laughter (triggered by oneself at will), both occurring with or without humor. The brain is not able to distinguish between these types; therefore, it is assumed that similar benefits may be achieved with one or the other. Although there is not enough data to demonstrate that laughter is an all-around healing agent, this review concludes that there exists sufficient evidence to suggest that laughter has some positive, quantifiable effects on certain aspects of health. In this era of evidence-based medicine, it would be appropriate for laughter to be used as a complementary/alternative medicine in the prevention and treatment of illnesses, although further well-designed research is warranted.", "title": "" }, { "docid": "ed97eae85ce430d6358826fccef3c0e1", "text": "Heart diseases, which are one of the death reasons, are among the several serious problems in this century and as per the latest survey, 60% of the patients die due to Heart problems. These diseases can be diagnosed by ECG (Electrocardiogram) signals. ECG measures electrical potentials on the body surface via contact electrodes thus it is very important signal in cardiology. Different artifacts affect the ECG signals which can thus cause problems in analyzing the ECG Thus signal processing schemes are applied to remove those interferences. The work proposed in this paper is removal of low frequency interference i.e. baseline wandering in ECG signal and digital filters are designed to remove it. The digital filters designed are FIR with different windowing methods as of Rectangular, Gaussian, Hamming, and Kaiser. The results obtained are at a low order of 56. The signals are taken from the MIT-BIH database which includes the normal and abnormal waveforms. The work has been done in MAT LAB environment where filters are designed in FDA Tool. The parameters are selected such that the noise is removed permanently. Also the best results are obtained at an order of 56 which makes hardware implementation easier. 
The result obtained for all FIR filters with different windows are compared by comparing the waveforms and power spectrums of the original and filtered ECG signals. The filters which gives the best results is the one using Kaiser Window.", "title": "" }, { "docid": "e27575b8d7a7455f1a8f941adb306a04", "text": "Seung-Joon Yi GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yiseung@seas.upenn.edu Stephen G. McGill GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: smcgill3@seas.upenn.edu Larry Vadakedathu GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: vlarry@seas.upenn.edu Qin He GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: heqin@seas.upenn.edu Inyong Ha Robotis, Seoul, Korea e-mail: dudung@robotis.com Jeakweon Han Robotis, Seoul, Korea e-mail: jkhan@robotis.com Hyunjong Song Robotis, Seoul, Korea e-mail: hjsong@robotis.com Michael Rouleau RoMeLa, Virginia Tech, Blacksburg, Virginia 24061 e-mail: mrouleau@vt.edu Byoung-Tak Zhang BI Lab, Seoul National University, Seoul, Korea e-mail: btzhang@bi.snu.ac.kr Dennis Hong RoMeLa, University of California, Los Angeles, Los Angeles, California 90095 e-mail: dennishong@ucla.edu Mark Yim GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yim@seas.upenn.edu Daniel D. Lee GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: ddlee@seas.upenn.edu", "title": "" }, { "docid": "a63bfd773444b0ac70700a840a844743", "text": "The utility of thermal inkjet (TIJ) technology for preparing solid dosage forms of drugs was examined. Solutions of prednisolone in a solvent mixture of ethanol, water, and glycerol (80/17/3 by volume) were dispensed onto poly(tetrafluoroethylene)-coated fiberglass films using TIJ cartridges and a personal printer and using a micropipette for comparison. The post-dried, TIJ-dispensed samples were shown to contain a mixture of prednisolone Forms I and III based on PXRD analyses that were confirmed by Raman analyses. The starting commercial material was determined to be Form I. Samples prepared by dispensing the solution from a micropipette initially showed only Form I; subsequent Raman mapping of these samples revealed the presence of two polymorphs. Raman mapping of the TIJ-dispensed samples also showed both polymorphs. The results indicate that the solvent mixture used in the dispensing solution combined with the thermal treatment of the samples after dispensing were likely the primary reason for the generation of the two polymorphs. The advantages of using a multidisciplinary approach to characterize drug delivery systems are demonstrated using solid state mapping techniques. Both PXRD and Raman spectroscopy were needed to fully characterize the samples. Finally, this report clarifies prednisolone's polymorphic nomenclature existent in the scientific literature.", "title": "" }, { "docid": "1efdb6ff65c1aa8f8ecb95b4d466335f", "text": "This paper provides a linguistic and pragmatic analysis of the phenomenon of irony in order to represent how Twitter’s users exploit irony devices within their communication strategies for generating textual contents. We aim to measure the impact of a wide-range of pragmatic phenomena in the interpretation of irony, and to investigate how these phenomena interact with contexts local to the tweet. 
Informed by linguistic theories, we propose for the first time a multi-layered annotation schema for irony and its application to a corpus of French, English and Italian tweets.We detail each layer, explore their interactions, and discuss our results according to a qualitative and quantitative perspective.", "title": "" }, { "docid": "dc7c0a60ad4da2e8356b831a905c0cf1", "text": "Including integral action in a nonlinear backstepping design is the topic of this paper. Two methods for adding integral feedback are proposed and analyzed. These are compared to the more traditional methods: 1) adaptive backstepping, and 2) plant augmentation that adds an extra relative degree and thus gives one extra step of backstepping. A test plant is used to compare the different control laws. Based on the theoretical analysis and the simulations, some interesting conclusions are made for each integral control strategy.", "title": "" }, { "docid": "07ca23b315d7ba087c10c066a5ea2266", "text": "This paper documents the extent to which occupational therapists use groups in practice. A questionnaire was mailed to 300 occupational therapists nationwide. Questions included the types of groups occupational therapists lead, the facilities in which the groups take place, the patients included, the activities presented, and individual and groups goals. Results were tabulated based on the responses of 120 therapists. We established that 60% of occupational therapists in all areas of practice lead groups in treatment. Of the 209 groups described by the respondents, there was a significantly greater number of activity groups than verbal groups. Also, there were significantly more groups with ten or less members than groups of more than ten. This paper describes the ten categories of groups that were identified in this study.", "title": "" }, { "docid": "64e37bb3cada08bd2b56b5fa806c4d07", "text": "Background: Statistical mechanics results (Dauphin et al. (2014); Choromanska et al. (2015)) suggest that local minima with high error are exponentially rare in high dimensions. However, to prove low error guarantees for Multilayer Neural Networks (MNNs), previous works so far required either a heavily modified MNN model or training method, strong assumptions on the labels (e.g., “near” linear separability), or an unrealistically wide hidden layer with Ω (N) units. Results: We examine a MNN with one hidden layer of piecewise linear units, a single output, and a quadratic loss. We prove that, with high probability in the limit of N → ∞ datapoints, the volume of differentiable regions of the empiric loss containing sub-optimal differentiable local minima is exponentially vanishing in comparison with the same volume of global minima, given standard normal input of dimension d0 = Ω̃ (√ N ) , and a more realistic number of d1 = Ω̃ (N/d0) hidden units. We demonstrate our results numerically: for example, 0% binary classification training error on CIFAR with only N/d0 ≈ 16 hidden neurons.", "title": "" }, { "docid": "499fe7f6bf5c7d8fcfe690e7390a5d36", "text": "Compressional or traumatic asphyxia is a well recognized entity to most forensic pathologists. The vast majority of reported cases have been accidental. The case reported here describes the apparent inflicted compressional asphyxia of a small child. 
A review of mechanisms and related controversy regarding proposed mechanisms is discussed.", "title": "" }, { "docid": "490dc6ee9efd084ecf2496b72893a39a", "text": "The rise of blockchain-based cryptocurrencies has led to an explosion of services using distributed ledgers as their underlying infrastructure. However, due to inherently single-service oriented blockchain protocols, such services can bloat the existing ledgers, fail to provide sufficient security, or completely forego the property of trustless auditability. Security concerns, trust restrictions, and scalability limits regarding the resource requirements of users hamper the sustainable development of loosely-coupled services on blockchains. This paper introduces Aspen, a sharded blockchain protocol designed to securely scale with increasing number of services. Aspen shares the same trust model as Bitcoin in a peer-to-peer network that is prone to extreme churn containing Byzantine participants. It enables introduction of new services without compromising the security, leveraging the trust assumptions, or flooding users with irrelevant messages.", "title": "" }, { "docid": "4449b826b2a6acb5ce10a0bcacabc022", "text": "Centralized Resource Description Framework (RDF) repositories have limitations both in their failure tolerance and in their scalability. Existing Peer-to-Peer (P2P) RDF repositories either cannot guarantee to find query results, even if these results exist in the network, or require up-front definition of RDF schemas and designation of super peers. We present a scalable distributed RDF repository (RDFPeers) that stores each triple at three places in a multi-attribute addressable network by applying globally known hash functions to its subject predicate and object. Thus all nodes know which node is responsible for storing triple values they are looking for and both exact-match and range queries can be efficiently routed to those nodes. RDFPeers has no single point of failure nor elevated peers and does not require the prior definition of RDF schemas. Queries are guaranteed to find matched triples in the network if the triples exist. In RDFPeers both the number of neighbors per node and the number of routing hops for inserting RDF triples and for resolving most queries are logarithmic to the number of nodes in the network. We further performed experiments that show that the triple-storing load in RDFPeers differs by less than an order of magnitude between the most and the least loaded nodes for real-world RDF data.", "title": "" }, { "docid": "860894abbbafdcb71178cb9ddd173970", "text": "Twitter is useful in a situation of disaster for communication, announcement, request for rescue and so on. On the other hand, it causes a negative by-product, spreading rumors. This paper describe how rumors have spread after a disaster of earthquake, and discuss how can we deal with them. We first investigated actual instances of rumor after the disaster. And then we attempted to disclose characteristics of those rumors. Based on the investigation we developed a system which detects candidates of rumor from twitter and then evaluated it. The result of experiment shows the proposed algorithm can find rumors with acceptable accuracy.", "title": "" }, { "docid": "36af986f61252f221a8135e80fe6432d", "text": "This chapter considers a set of questions at the interface of the study of intuitive theories, causal knowledge, and problems of inductive inference. 
By an intuitive theory, we mean a cognitive structure that in some important ways is analogous to a scientific theory. It is becoming broadly recognized that intuitive theories play essential roles in organizing our most basic knowledge of the world, particularly for causal structures in physical, biological, psychological or social domains (Atran, 1995; Carey, 1985a; Kelley, 1973; McCloskey, 1983; Murphy & Medin, 1985; Nichols & Stich, 2003). A principal function of intuitive theories in these domains is to support the learning of new causal knowledge: generating and constraining people’s hypotheses about possible causal relations, highlighting variables, actions and observations likely to be informative about those hypotheses, and guiding people’s interpretation of the data they observe (Ahn & Kalish, 2000; Pazzani, 1987; Pazzani, Dyer & Flowers, 1986; Waldmann, 1996). Leading accounts of cognitive development argue for the importance of intuitive theories in children’s mental lives and frame the major transitions of cognitive development as instances of theory change (Carey, 1985a; Gopnik & Meltzoff, 1997; Inagaki & Hatano 2002; Wellman & Gelman, 1992). Here we attempt to lay out some prospects for understanding the structure, function, and acquisition of intuitive theories from a rational computational perspective. From this viewpoint, theory-like representations are not just a convenient way of summarizing certain aspects of human knowledge. They provide crucial foundations for successful learning and reasoning, and we want to understand how they do so. With this goal in mind, we focus on", "title": "" }, { "docid": "e8edd727e923595acc80df364bfc64af", "text": "Context: Architecture-centric software evolution (ACSE) enables changes in system’s structure and behaviour while maintaining a global view of the software to address evolution-centric trade-offs. The existing research and practices for ACSE primarily focus on design-time evolution and runtime adaptations to accommodate changing requirements in existing architectures. Objectives: We aim to identify, taxonomically classify and systematically compare the existing research focused on enabling or enhancing change reuse to support ACSE. Method: We conducted a systematic literature review of 32 qualitatively selected studies and taxonomically classified these studies based on solutions that enable (i) empirical acquisition and (ii) systematic application of architecture evolution reuse knowledge (AERK) to guide ACSE. Results: We identified six distinct research themes that support acquisition and application of AERK. We investigated (i) how evolution reuse knowledge is defined, classified and represented in the existing research to support ACSE and (ii) what are the existing methods, techniques and solutions to support empirical acquisition and systematic application of AERK. Conclusions: Change patterns (34% of selected studies) represent a predominant solution, followed by evolution styles (25%) and adaptation strategies and policies (22%) to enable application of reuse knowledge. Empirical methods for acquisition of reuse knowledge represent 19% including pattern discovery, configuration analysis, evolution and maintenance prediction techniques (approximately 6% each). A lack of focus on empirical acquisition of reuse knowledge suggests the need of solutions with architecture change mining as a complementary and integrated phase for architecture change execution. Copyright © 2014 John Wiley & Sons, Ltd. 
Received 13 May 2013; Revised 23 September 2013; Accepted 27 December 2013", "title": "" }, { "docid": "a5f557ddac63cd24a11c1490e0b4f6d4", "text": "Continuous opinion dynamics optimizer (CODO) is an algorithm based on human collective opinion formation process for solving continuous optimization problems. In this paper, we have studied the impact of topology and introduction of leaders in the society on the optimization performance of CODO. We have introduced three new variants of CODO and studied the efficacy of algorithms on several benchmark functions. Experimentation demonstrates that scale free CODO performs significantly better than all algorithms. Also, the role played by individuals with different degrees during the optimization process is studied.", "title": "" }, { "docid": "c55de58c07352373570ec7d46c5df03d", "text": "Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2) are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.", "title": "" } ]
scidocsrr
7401f7870ae26de1f7f6561e2805ec48
OpenTuner: An extensible framework for program autotuning
[ { "docid": "e0724c87fd4344e01cb9260fdd36856c", "text": "In this paper we introduce a multi-objective auto-tuning framework comprising compiler and runtime components. Focusing on individual code regions, our compiler uses a novel search technique to compute a set of optimal solutions, which are encoded into a multi-versioned executable. This enables the runtime system to choose specifically tuned code versions when dynamically adjusting to changing circumstances.\n We demonstrate our method by tuning loop tiling in cache-sensitive parallel programs, optimizing for both runtime and efficiency. Our static optimizer finds solutions matching or surpassing those determined by exhaustively sampling the search space on a regular grid, while using less than 4% of the computational effort on average. Additionally, we show that parallelism-aware multi-versioning approaches like our own gain a performance improvement of up to 70% over solutions tuned for only one specific number of threads.", "title": "" } ]
[ { "docid": "9ec8a4b8e052b352775b5f6fb98ff914", "text": "For most of the existing commercial driver assistance systems the use of a single environmental sensor and a tracking model tied to the characteristics of this sensor is sufficient. When using a multi-sensor fusion approach with heterogeneous sensors the information available for tracking depends on the sensors detecting the object. This paper describes an approach where multiple models are used for tracking moving objects. The best model for tracking is chosen based on the available sensor information. The architecture of the tracking system along with the tracking models and algorithm for model selection are presented. The design of the architecture and algorithms allows an extension of the system with new sensors and tracking models without changing existing software. The approach was implemented and successfully used in Tartan Racing’s autonomous vehicle for the Urban Grand Challenge. The advantages of the multisensor approach are explained and practical results of a representative scenario are presented.", "title": "" }, { "docid": "b971d5de8444fbfb9951e97dcbe8bdc5", "text": "With the rapid development of Internet of Things (IoT), enormous events are produced by various kinds of devices at high speed. Complex Event Processing (CEP) is the key part of IoT middleware which can help the user to get semantic meanings of primitive events. Context-awareness is an important feature of CEP engine. In this paper a high performance distributed context-aware CEP architecture and method is proposed for internet of things. Context is modeled as fuzzy ontology to support uncertainty and linguistic variables in event queries. Based on fuzzy ontology query and similarity based distributed reasoning, complex event query plans are generated and context-aware queries are rewritten into context independent sub-queries. Data window is partitioned according to different event patterns and context. The sub-queries are optimized and executed parallel based on data partition. The experiments show that this method can support fuzzy context in CEP and have acceptable performance and scalability.", "title": "" }, { "docid": "1613f06fd110bdf468c3cbaae546a67c", "text": "Identifying causal mechanisms is a fundamental goal of social science. Researchers seek to study not only whether one variable affects another but also how such a causal relationship arises. Yet commonly used statistical methods for identifying causal mechanisms rely upon untestable assumptions and are often inappropriate even under those assumptions. Randomizing treatment and intermediate variables is also insufficient. Despite these difficulties, the study of causal mechanisms is too important to abandon. We make three contributions to improve research on causal mechanisms. First, we present a minimum set of assumptions required under standard designs of experimental and observational studies and develop a general algorithm for estimating causal mediation effects. Second, we provide a method for assessing the sensitivity of conclusions to potential violations of a key assumption. Third, we offer alternative research designs for identifying causal mechanisms under weaker assumptions. 
The proposed approach is illustrated using media framing experiments and incumbency advantage studies.", "title": "" }, { "docid": "d1fa477646e636a3062312d6f6444081", "text": "This paper proposes a novel attention model for semantic segmentation, which aggregates multi-scale and context features to refine prediction. Specifically, the skeleton convolutional neural network framework takes in multiple different scales inputs, by which means the CNN can get representations in different scales. The proposed attention model will handle the features from different scale streams respectively and integrate them. Then location attention branch of the model learns to softly weight the multi-scale features at each pixel location. Moreover, we add an recalibrating branch, parallel to where location attention comes out, to recalibrate the score map per class. We achieve quite competitive results on PASCAL VOC 2012 and ADE20K datasets, which surpass baseline and related works.", "title": "" }, { "docid": "a4d418c4548d5866c55fd06bb4085d79", "text": "The Internet of Things (IoT) comprises a complex network of smart devices, which frequently exchange data through the Internet. Given the significant growth of IoT as a new technological paradigm, which may involve safety-critical operations and sensitive data to be put online, its security aspect is vital. This paper studies the network security matters in the smart home, health care and transportation domains. It is possible that the interruption might occur in IoT devices during operation causing them to be in the shutdown mode. Taxonomy of security attacks within IoT networks is constructed to assist IoT developers for better awareness of the risk of security flaws so that better protections shall be incorporated.", "title": "" }, { "docid": "0aa84826291bb9b7a15a1edac43b3b2e", "text": "Reservoir computing (RC), a computational paradigm inspired on neural systems, has become increasingly popular in recent years for solving a variety of complex recognition and classification problems. Thus far, most implementations have been software-based, limiting their speed and power efficiency. Integrated photonics offers the potential for a fast, power efficient and massively parallel hardware implementation. We have previously proposed a network of coupled semiconductor optical amplifiers as an interesting test case for such a hardware implementation. In this paper, we investigate the important design parameters and the consequences of process variations through simulations. We use an isolated word recognition task with babble noise to evaluate the performance of the photonic reservoirs with respect to traditional software reservoir implementations, which are based on leaky hyperbolic tangent functions. Our results show that the use of coherent light in a well-tuned reservoir architecture offers significant performance benefits. The most important design parameters are the delay and the phase shift in the system's physical connections. With optimized values for these parameters, coherent semiconductor optical amplifier (SOA) reservoirs can achieve better results than traditional simulated reservoirs. We also show that process variations hardly degrade the performance, but amplifier noise can be detrimental. 
This effect must therefore be taken into account when designing SOA-based RC implementations.", "title": "" }, { "docid": "6a851f4fdd456dbaef547a63d53c7a5a", "text": "In the 20th century, the introduction of multiple vaccines significantly reduced childhood morbidity, mortality, and disease outbreaks. Despite, and perhaps because of, their public health impact, an increasing number of parents and patients are choosing to delay or refuse vaccines. These individuals are described as \"vaccine hesitant.\" This phenomenon has developed due to the confluence of multiple social, cultural, political, and personal factors. As immunization programs continue to expand, understanding and addressing vaccine hesitancy will be crucial to their successful implementation. This review explores the history of vaccine hesitancy, its causes, and suggested approaches for reducing hesitancy and strengthening vaccine acceptance.", "title": "" }, { "docid": "3c9b28e47b492e329043941f4ff088b7", "text": "The importance of motion in attracting attention is well known. While watching videos, where motion is prevalent, how do we quantify the regions that are motion salient? In this paper, we investigate the role of motion in attention and compare it with the influence of other low-level features like image orientation and intensity. We propose a framework for motion saliency. In particular, we integrate motion vector information with spatial and temporal coherency to generate a motion attention map. The results show that our model achieves good performance in identifying regions that are moving and salient. We also find motion to have greater influence on saliency than other low-level features when watching videos.", "title": "" }, { "docid": "c6c04fe37b540df1ab54f31dd01afef6", "text": "A backtracking algorithm for testing a pair of digraphs for isomorphism is presented. The information contained in the distance matrix representation of a graph is used to establish an initial partition of the graph's vertices. This distance matrix information is then applied in a backtracking procedure to reduce the search tree of possible mappings. While the algorithm is not guaranteed to run in polynomial time, it performs efficiently for a large class of graphs.", "title": "" }, { "docid": "c3245c1c762db74a99b3196896d2ad52", "text": "Over the past few years, neural networks have re-emerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models started to be applied also to textual natural language signals, again with very promising results. This tutorial surveys neural network models from the perspective of natural language processing research, in an attempt to bring natural-language researchers up to speed with the neural techniques. The tutorial covers input encoding for natural language tasks, feed-forward networks, convolutional networks, recurrent networks and recursive networks, as well as the computation graph abstraction for automatic gradient computation.", "title": "" }, { "docid": "eaddba3b27a3a1faf9e957917d102d3f", "text": "Some recent modifications of the protein assay by the method of Lowry, Rosebrough, Farr, and Randall (1951, .I. Biol. Chem. 193, 265-275) have been reexamined and altered to provide a consolidated method which is simple, rapid, objective, and more generally applicable. 
A DOC-TCA protein precipitation technique provides for rapid quantitative recovery of soluble and membrane proteins from interfering substances even in very dilute solutions (< 1 pg/ml of protein). SDS is added to alleviate possible nonionic and cationic detergent and lipid interferences, and to provide mild conditions for rapid denaturation of membrane and proteolipid proteins. A simple method based on a linear log-log protein standard curve is presented to permit rapid and totally objective protein analysis using small programmable calculators. The new modification compared favorably with the original method of Lowry ef al.", "title": "" }, { "docid": "058a4f93fb5c24c0c9967fca277ee178", "text": "We report on the SUM project which applies automatic summarisation techniques to the legal domain. We describe our methodology whereby sentences from the text are classified according to their rhetorical role in order that particular types of sentence can be extracted to form a summary. We describe some experiments with judgments of the House of Lords: we have performed automatic linguistic annotation of a small sample set and then hand-annotated the sentences in the set in order to explore the relationship between linguistic features and argumentative roles. We use state-of-the-art NLP techniques to perform the linguistic annotation using XML-based tools and a combination of rule-based and statistical methods. We focus here on the predictive capacity of tense and aspect features for a classifier.", "title": "" }, { "docid": "a473465e2e567f260089bb39806f79a6", "text": "The objective of the study presented was to determine the prevalence of oral problems--eg, dental erosion, rough surfaces, pain--among young competitive swimmers in India, because no such studies are reported. Its design was a cross-sectional study with a questionnaire and clinical examination protocols. It was conducted in a community setting on those who were involved in regular swimming in pools. Questionnaires were distributed to swimmers at the 25th State Level Swimming Competition, held at Thane Municipal Corporation's Swimming Pool, India. Those who returned completed questionnaires were also clinically examined. Questionnaires were analyzed and clinical examinations focused on either the presence or absence of dental erosions and rough surfaces. Reported results were on 100 swimmers who met the inclusion criteria. They included 75 males with a mean age of 18.6 ± 6.3 years and 25 females with a mean age of 15.3 ± 7.02 years. Among them, 90% showed dental erosion, 94% exhibited rough surfaces, and 88% were found to be having tooth pain of varying severity. Erosion and rough surfaces were found to be directly proportional to the duration of swimming. The authors concluded that the prevalence of dental erosion, rough surfaces, and pain is found to be very common among competitive swimmers. They recommend that swimmers practice good preventive measures and clinicians evaluate them for possible swimmer's erosion.", "title": "" }, { "docid": "c6ba253d2981a97fff50c7b7c9c894f6", "text": "Many vision and language tasks require commonsense reasoning beyond data-driven image and natural language processing. Here we adopt Visual Question Answering (VQA) as an example task, where a system is expected to answer a question in natural language about an image. Current state-ofthe-art systems attempted to solve the task using deep neural architectures and achieved promising performance. 
However, the resulting systems are generally opaque and they struggle in understanding questions for which extra knowledge is required. In this paper, we present an explicit reasoning layer on top of a set of penultimate neural network based systems. The reasoning layer enables reasoning and answering questions where additional knowledge is required, and at the same time provides an interpretable interface to the end users. Specifically, the reasoning layer adopts a Probabilistic Soft Logic (PSL) based engine to reason over a basket of inputs: visual relations, the semantic parse of the question, and background ontological knowledge from word2vec and ConceptNet. Experimental analysis of the answers and the key evidential predicates generated on the VQA dataset validate", "title": "" }, { "docid": "2e5ad502de49f7ae72a2591876811a53", "text": "Massive Machine-Type Communications (mMTC) presents significant challenges in terms of the number of devices accessing the shared resource, and coordination among those devices. This paper overviews work on RAN congestion control in order to better manage resources in the context of device-to-device (D2D) interaction among the MTCDs. It then proceeds to introduce a novel grouping-assisted random access protocol for mMTC, showing beneficial performance of the concept against parameters such as group size, number of MTCDs in the overall scenario, and reliability of D2D links. Finally, the association is made with a Geolocation Database (GDB) capability to assist the grouping decisions, drawing parallels with recent regulatory-driven initiatives around GDBs, and arguing benefits of the concept.", "title": "" }, { "docid": "48370cc694460cc6900213a69f13b5a5", "text": "This paper describes a strategy to feature point correspondence and motion recovery in vehicle navigation. A transformation of the image plane is proposed that keeps the motion of the vehicle on a plane parallel to the transformed image plane. This permits to define linear tracking filters to estimate the real-world positions of the features, and allows us to select the matches that accomplish the rigidity of the scene by a Hough transform. Candidate correspondences are selected by similarity, taking into account the smoothness of motion. Further processing brings out the final matching. The methods have been tested in a real application. © 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "a921c4eba2d9590b9b8f4679349c985b", "text": "Advances in micro-electro-mechanical (MEMS) techniques enable inertial measurements units (IMUs) to be small, cheap, energy efficient, and widely used in smartphones, robots, and drones. Exploiting inertial data for accurate and reliable navigation and localization has attracted significant research and industrial interest, as IMU measurements are completely ego-centric and generally environment agnostic. Recent studies have shown that the notorious issue of drift can be significantly alleviated by using deep neural networks (DNNs) [1]. However, the lack of sufficient labelled data for training and testing various architectures limits the proliferation of adopting DNNs in IMU-based tasks. In this paper, we propose and release the Oxford Inertial Odometry Dataset (OxIOD), a first-of-its-kind data collection for inertial-odometry research, with all sequences having ground-truth labels. 
Our dataset contains 158 sequences totalling more than 42 km in total distance, much larger than previous inertial datasets. Another notable feature of this dataset lies in its diversity, which can reflect the complex motions of phone-based IMUs in various everyday usage. The measurements were collected with four different attachments (handheld, in the pocket, in the handbag and on the trolley), four motion modes (halting, walking slowly, walking normally, and running), five different users, four types of off-the-shelf consumer phones, and large-scale localization from office buildings. Deep inertial tracking experiments were conducted to show the effectiveness of our dataset in training deep neural network models and evaluate learning-based and model-based algorithms. The OxIOD Dataset is available at: http://deepio.cs.ox.ac.uk", "title": "" }, { "docid": "6883add239f58223ef1941d5044d4aa8", "text": "A novel jitter equalization circuit is presented that addresses crosstalk-induced jitter in high-speed serial links. A simple model of electromagnetic coupling demonstrates the generation of crosstalk-induced jitter. The analysis highlights unique aspects of crosstalk-induced jitter that differ from far-end crosstalk. The model is used to predict the crosstalk-induced jitter in 2-PAM and 4-PAM, which is compared to measurement. Furthermore, the model suggests an equalizer that compensates for the data-induced electromagnetic coupling between adjacent links and is suitable for pre- or post-emphasis schemes. The circuits are implemented using 130-nm MOSFETs and operate at 5-10 Gb/s. The results demonstrate reduced deterministic jitter and lower bit-error rate (BER). At 10 Gb/s, the crosstalk-induced jitter equalizer opens the eye at 10/sup -12/ BER from 17 to 45 ps and lowers the rms jitter from 8.7 to 6.3 ps.", "title": "" }, { "docid": "19d7bb6102897541fbc605b07f4c5483", "text": "This article surveys the principal generative syntactic analyses that have been proposed for ergativity, found primarily in Inuit, Austronesian, Mayan, and Pama-Nyungan language families. The main puzzle for generative grammar is how to analyze the behavior of ergative and absolutive arguments in terms of the grammatical functions of subject and object. I show in this article that early approaches tend to treat the absolutive uniformly as a subject or an object, while later analyses move toward disassociating case from grammatical function. Descriptively speaking, this article identifies two types of morphological ergativity, differing in how absolutive case is assigned. Morphological ergativity is also distinguished from syntactic ergativity, which is characterized primarily by a restriction that only absolutives can undergo A’-movement. In other aspects of the grammar, ergativity is not strikingly different from accusativity.", "title": "" }, { "docid": "174406f7c5dabb3007158987d35d6de2", "text": "In this paper, we propose a toolkit for efficient and privacy-preserving outsourced calculation under multiple encrypted keys (EPOM). Using EPOM, a large scale of users can securely outsource their data to a cloud server for storage. Moreover, encrypted data belonging to multiple users can be processed without compromising on the security of the individual user's (original) data and the final computed results. To reduce the associated key management cost and private key exposure risk in EPOM, we present a distributed two-trapdoor public-key cryptosystem, the core cryptographic primitive. 
We also present the toolkit to ensure that the commonly used integer operations can be securely handled across different encrypted domains. We then prove that the proposed EPOM achieves the goal of secure integer number processing without resulting in privacy leakage of data to unauthorized parties. Last, we demonstrate the utility and the efficiency of EPOM using simulations.", "title": "" } ]
scidocsrr
ca453e5ffcf0e79d74d4481f36059175
The Painful Shoulder: Shoulder Impingement Syndrome
[ { "docid": "b52312f9fbf86ce0dbf475623b472d8d", "text": "The vascular pattern of the supraspinatus tendon was studied in 18 human anatomic specimens. The ages of the specimens ranged from 26 to 84 years. Selective vascular injection with a silicon-rubber compound allowed visualization of the vascular bed of the rotator cuff and humeral head. The presence of a hypovascular or critical zone close to the insertion of the supraspinatus tendon into the humeral head was confirmed. However, only a uniformly sparse vascular distribution was found at the articular side, as opposed to the well-vascularized bursal side. This was also confirmed with histologic sections of the tendon. The poor vascularity of the tendon in this area could be a significant factor in the pathogenesis of degenerative rotator cuff tears.", "title": "" } ]
[ { "docid": "e602ab2a2d93a8912869ae8af0925299", "text": "Software-based MMU emulation lies at the heart of out-of-VM live memory introspection, an important technique in the cloud setting that applications such as live forensics and intrusion detection depend on. Due to the emulation, the software-based approach is much slower compared to native memory access by the guest VM. The slowness not only results in undetected transient malicious behavior, but also inconsistent memory view with the guest; both undermine the effectiveness of introspection. We propose the immersive execution environment (ImEE) with which the guest memory is accessed at native speed without any emulation. Meanwhile, the address mappings used within the ImEE are ensured to be consistent with the guest throughout the introspection session. We have implemented a prototype of the ImEE on Linux KVM. The experiment results show that ImEE-based introspection enjoys a remarkable speed up, performing several hundred times faster than the legacy method. Hence, this design is especially useful for real-time monitoring, incident response and high-intensity introspection.", "title": "" }, { "docid": "3083b478e4b489de4c8b356ed64c65c9", "text": "The proliferation of soft robotics research worldwide has brought substantial achievements in terms of principles, models, technologies, techniques, and prototypes of soft robots. Such achievements are reviewed here in terms of the abilities that they provide robots that were not possible before. An analysis of the evolution of this field shows how, after a few pioneering works in the years 2009 to 2012, breakthrough results were obtained by taking seminal technological and scientific challenges related to soft robotics from actuation and sensing to modeling and control. Further progress in soft robotics research has produced achievements that are important in terms of robot abilities—that is, from the viewpoint of what robots can do today thanks to the soft robotics approach. Abilities such as squeezing, stretching, climbing, growing, and morphing would not be possible with an approach based only on rigid links. The challenge ahead for soft robotics is to further develop the abilities for robots to grow, evolve, self-heal, develop, and biodegrade, which are the ways that robots can adapt their morphology to the environment.", "title": "" }, { "docid": "01e419d399bd19b9ed1c34c67f1767a9", "text": "By using music written in a certain style as training data, parameters can be calculated for Markov chains and hidden Markov models to capture the musical style of the training data as mathematical models.", "title": "" }, { "docid": "ada8c64a2e5c7be58a2200e8d1f64063", "text": "Nitrogen-containing bioactive alkaloids of plant origin play a significant role in human health and medicine. Several semisynthetic antimitotic alkaloids are successful in anticancer drug development. Gloriosa superba biosynthesizes substantial quantities of colchicine, a bioactive molecule for gout treatment. Colchicine also has antimitotic activity, preventing growth of cancer cells by interacting with microtubules, which could lead to the design of better cancer therapeutics. Further, several colchicine semisynthetics are less toxic than colchicine. Research is being conducted on effective, less toxic colchicine semisynthetic formulations with potential drug delivery strategies directly targeting multiple solid cancers. 
This article reviews the dynamic state of anticancer drug development from colchicine semisynthetics and natural colchicine production and briefly discusses colchicine biosynthesis.", "title": "" }, { "docid": "1bfe17bba2d4a846f5745283594c1464", "text": "Software engineers need to be able to create, modify, and analyze knowledge stored in software artifacts. A significant amount of these artifacts contain natural language, like version control commit messages, source code comments, or bug reports. Integrated software development environments (IDEs) are widely used, but they are only concerned with structured software artifacts – they do not offer support for analyzing unstructured natural language and relating this knowledge with the source code. We present an integration of natural language processing capabilities into the Eclipse framework, a widely used software IDE. It allows to execute NLP analysis pipelines through the Semantic Assistants framework, a service-oriented architecture for brokering NLP services based on GATE. We demonstrate a number of semantic analysis services helpful in software engineering tasks, and evaluate one task in detail, the quality analysis of source code comments.", "title": "" }, { "docid": "e592ccd706b039b12cc4e724a7b217cd", "text": "In fully distributed machine learning, privacy and security are important issues. These issues are often dealt with using secure multiparty computation (MPC). However, in our application domain, known MPC algorithms are not scalable or not robust enough. We propose a light-weight protocol to quickly and securely compute the sum of the inputs of a subset of participants assuming a semi-honest adversary. During the computation the participants learn no individual values. We apply this protocol to efficiently calculate the sum of gradients as part of a fully distributed mini-batch stochastic gradient descent algorithm. The protocol achieves scalability and robustness by exploiting the fact that in this application domain a “quick and dirty” sum computation is acceptable. In other words, speed and robustness takes precedence over precision. We analyze the protocol theoretically as well as experimentally based on churn statistics from a real smartphone trace. We derive a sufficient condition for preventing the leakage of an individual value, and we demonstrate the feasibility of the overhead of the protocol.", "title": "" }, { "docid": "aa5c22fa803a65f469236d2dbc5777a3", "text": "This article presents data on CVD and risk factors in Asian women. Data were obtained from available cohort studies and statistics for mortality from the World Health Organization. CVD is becoming an important public health problem among Asian women. There are high rates of CHD mortality in Indian and Central Asian women; rates are low in southeast and east Asia. Chinese and Indian women have very high rates and mortality from stroke; stroke is also high in central Asian and Japanese women. Hypertension and type 2 DM are as prevalent as in western women, but rates of obesity and smoking are less common. Lifestyle interventions aimed at prevention are needed in all areas.", "title": "" }, { "docid": "e75d3488f38e08a7e83970f444675069", "text": "In 1950, Gräfenberg described a distinct erotogenic zone on the anterior wall of the vagina, which was referred to as the Gräfenberg spot (G-spot) by Addiego, Whipple (a nurse) et al. in 1981. As a result, the G-spot has become a central topic of popular speculation and a basis of a huge business surrounding it. 
In our opinion, these sexologists have made a hotchpotch of Gräfenberg’s thoughts and ideas that were set forth and expounded in his 1950 article: the intraurethral glands are not the corpus spongiosum of the female urethra, and Gräfenberg did not report an orgasm of the intraurethral glands. G-spot amplification is a cosmetic surgery procedure for temporarily increasing the size and sensitivity of the G-spot in which a dermal filler or a collagen-like material is injected into the bladder–vaginal septum. All published scientific data point to the fact that the G-spot does not exist, and the supposed G-spot should not be identified with Gräfenberg’s name. Moreover, G-spot amplification is not medically indicated and is an unnecessary and inefficacious medical procedure.", "title": "" }, { "docid": "040e5e800895e4c6f10434af973bec0f", "text": "The authors investigated the effect of action gaming on the spatial distribution of attention. The authors used the flanker compatibility effect to separately assess center and peripheral attentional resources in gamers versus nongamers. Gamers exhibited an enhancement in attentional resources compared with nongamers, not only in the periphery but also in central vision. The authors then used a target localization task to unambiguously establish that gaming enhances the spatial distribution of visual attention over a wide field of view. Gamers were more accurate than nongamers at all eccentricities tested, and the advantage held even when a concurrent center task was added, ruling out a trade-off between central and peripheral attention. By establishing the causal role of gaming through training studies, the authors demonstrate that action gaming enhances visuospatial attention throughout the visual field.", "title": "" }, { "docid": "1a5ddde73f38ab9b2563540c36c222c0", "text": "This paper presents a self-adaptive autonomous online learning through a general type-2 fuzzy system (GT2 FS) for the motor imagery (MI) decoding of a brain-machine interface (BMI) and navigation of a bipedal humanoid robot in a real experiment, using electroencephalography (EEG) brain recordings only. GT2 FSs are applied to BMI for the first time in this study. We also account for several constraints commonly associated with BMI in real practice: 1) the maximum number of EEG channels is limited and fixed; 2) no possibility of performing repeated user training sessions; and 3) desirable use of unsupervised and low-complexity feature extraction methods. The novel online learning method presented in this paper consists of a self-adaptive GT2 FS that can autonomously self-adapt both its parameters and structure via creation, fusion, and scaling of the fuzzy system rules in an online BMI experiment with a real robot. The structure identification is based on an online GT2 Gath–Geva algorithm where every MI decoding class can be represented by multiple fuzzy rules (models), which are learnt in a continous (trial-by-trial) non-iterative basis. The effectiveness of the proposed method is demonstrated in a detailed BMI experiment, in which 15 untrained users were able to accurately interface with a humanoid robot, in a single session, using signals from six EEG electrodes only.", "title": "" }, { "docid": "231d7797961326974ca3a3d2271810ae", "text": "Agile methods form an alternative to waterfall methodologies. 
Little is known about activity composition, the proportion of varying activities in agile processes and the extent to which the proportions of activities differ from \"waterfall\" processes. In the current study, we examine the variation in performative routines in one large agile and traditional lifecycle project using an event sequencing method. Our analysis shows that the enactment of waterfall and agile routines differ significantly suggesting that agile process is composed of fewer activities which are repeated iteratively.", "title": "" }, { "docid": "5e6c16c5d65d855eaf60aa2295bab5f5", "text": "The objective of positive education is not only to improve students' well-being but also their academic performance. As an important concept in positive education, growth mindset refers to core assumptions about the malleability of a person's intellectual abilities. The present study investigates the relation of growth mindsets to psychological well-being and school engagement. The study also explores the mediating function of resilience in this relation. We recruited a total of 1260 (658 males and 602 females) Chinese students from five diversified primary and middle schools. Results from the structural equation model show that the development of high levels of growth mindsets in students predicts higher psychological well-being and school engagement through the enhancement of resilience. The current study contributes to our understanding of the potential mechanisms by which positive education (e.g., altering the mindset of students) can impact psychological well-being and school engagement.", "title": "" }, { "docid": "22d17576fef96e5fcd8ef3dd2fb0cc5f", "text": "In a previous article (\"Agile Software Development: The Business of Innovation,\" Computer, Sept. 2001, pp. 120-122), we introduced agile software development through the problem it addresses and the way in which it addresses the problem. Here, we describe the effects of working in an agile style. Over recent decades, while market forces, systems requirements, implementation technology, and project staff were changing at a steadily increasing rate, a different development style showed its advantages over the traditional one. This agile style of development directly addresses the problems of rapid change. A dominant idea in agile development is that the team can be more effective in responding to change if it can • reduce the cost of moving information between people, and • reduce the elapsed time between making a decision to seeing the consequences of that decision. To reduce the cost of moving information between people, the agile team works to • place people physically closer, • replace documents with talking in person and at whiteboards, and • improve the team's amicability—its sense of community and morale—so that people are more inclined to relay valuable information quickly. To reduce the time from decision to feedback, the agile team • makes user experts available to the team or, even better, part of the team and • works incrementally. Making user experts available as part of the team gives developers rapid feedback on the implications to the user of their design choices. The user experts, seeing the growing software in its earliest stages, learn both what the developers misunderstood and also which of their requests do not work as well in practice as they had thought. The term agile, coined by a group of people experienced in developing software this way, has two distinct connotations. 
The first is the idea that the business and technology worlds have become turbulent, high speed, and uncertain, requiring a process to both create change and respond rapidly to change. The first connotation implies the second one: An agile process requires responsive people and organizations. Agile development focuses on the talents and skills of individuals and molds process to specific people and teams, not the other way around. The most important implication to managers working in the agile manner is that it places more emphasis on people factors in the project: amicability, talent, skill, and communication. These qualities become a primary concern …", "title": "" }, { "docid": "092b55732087aef57a1164c228c00d8b", "text": "Penetration of advanced sensor systems such as advanced metering infrastructure (AMI), high-frequency overhead and underground current and voltage sensors have been increasing significantly in power distribution systems over the past few years. According to U.S. energy information administration (EIA), the aggregated AMI installation experienced a 17 times increase from 2007 to 2012. The AMI usually collects electricity usage data every 15 minute, instead of once a month. This is a 3,000 fold increase in the amount of data utilities would have processed in the past. It is estimated that the electricity usage data collected through AMI in the U.S. amount to well above 100 terabytes in 2012. To unleash full value of the complex data sets, innovative big data algorithms need to be developed to transform the way we operate and plan for the distribution system. This paper not only proposes promising applications but also provides an in-depth discussion of technical and regulatory challenges and risks of big data analytics in power distribution systems. In addition, a flexible system architecture design is proposed to handle heterogeneous big data analysis workloads.", "title": "" }, { "docid": "1b13208e3f8b70dbee13cf0bff2203b8", "text": "A variation on an existing antenna feed system for use on simultaneous X/Ka-band satellite ground terminals is presented. The modified design retains the important functionality of the existing feed system, using a simplified approach that aims to significantly reduce the weight and the cost of manufacture.", "title": "" }, { "docid": "91d3008dcd6c351d6cc0187c59cad8df", "text": "Peer-to-peer markets such as eBay, Uber, and Airbnb allow small suppliers to compete with traditional providers of goods or services. We view the primary function of these markets as making it easy for buyers to find sellers and engage in convenient, trustworthy transactions. We discuss elements of market design that make this possible, including search and matching algorithms, pricing, and reputation systems. We then develop a simple model of how these markets enable entry by small or flexible suppliers, and the resulting impact on existing firms. Finally, we consider the regulation of peer-to-peer markets, and the economic arguments for different approaches to licensing and certification, data and employment regulation. We appreciate support from the National Science Foundation, the Stanford Institute for Economic Policy Research, the Toulouse Network on Information Technology, and the Alfred P. Sloan Foundation. Einav and Levin: Department of Economics, Stanford University and NBER. Farronato: Harvard Business School. 
Email: leinav@stanford.edu, chiarafarronato@gmail.com, jdlevin@stanford.edu.", "title": "" }, { "docid": "b7f15089db3f5d04c1ce1d5f09b0b1f0", "text": "Despite the flourishing research on the relationships between affect and language, the characteristics of pain-related words, a specific type of negative words, have never been systematically investigated from a psycholinguistic and emotional perspective, despite their psychological relevance. This study offers psycholinguistic, affective, and pain-related norms for words expressing physical and social pain. This may provide a useful tool for the selection of stimulus materials in future studies on negative emotions and/or pain. We explored the relationships between psycholinguistic, affective, and pain-related properties of 512 Italian words (nouns, adjectives, and verbs) conveying physical and social pain by asking 1020 Italian participants to provide ratings of Familiarity, Age of Acquisition, Imageability, Concreteness, Context Availability, Valence, Arousal, Pain-Relatedness, Intensity, and Unpleasantness. We also collected data concerning Length, Written Frequency (Subtlex-IT), N-Size, Orthographic Levenshtein Distance 20, Neighbor Mean Frequency, and Neighbor Maximum Frequency of each word. Interestingly, the words expressing social pain were rated as more negative, arousing, pain-related, and conveying more intense and unpleasant experiences than the words conveying physical pain.", "title": "" }, { "docid": "0cd077bec6516b3cdb86a8ccd185df78", "text": "In this paper, a general purpose multi-agent classifier system based on the blackboard architecture using reinforcement Learning techniques is proposed for tackling complex data classification problems. A trust metric for evaluating agent’s performance and expertise based on Q-learning and employing different voting processes is formulated. Specifically, multiple heterogeneous machine learning agents, are devised to form the expertise group for the proposed Coordinated Heterogeneous Intelligent Multi-Agent Classifier System (CHIMACS). To evaluate the effectiveness of CHIMACS, a variety of benchmark problems are used, including small and high dimensional datasets with and without noise. The results from CHIMACS are compared with those of individual ML models and ensemble methods. The results indicate that CHIMACS is effective in identifying classifier agent expertise and can combine their knowledge to improve the overall prediction performance.", "title": "" }, { "docid": "1dccd5745d29310e2ca1b9f302efd0bb", "text": "Graph structure which is often used to model the relationship between the data items has drawn more and more attention. The graph datasets from many important domains have the property called scale-free. In the scale-free graphs, there exist the hubs, which have much larger degree than the average value. The hubs may cause the problems of load imbalance, poor scalability and high communication overhead when the graphs are processed in the distributed memory systems. In this paper, we design an asynchronous graph processing framework targeted for distributed memory by considering the hubs as a separate part of the vertexes, which we call it the hub-centric idea. Specifically speaking, a hub-duplicate graph partitioning method is proposed to balance the workload and reduce the communication overhead. At the same time, an efficient asynchronous state synchronization method for the duplicates is also proposed. 
In addition, a priority scheduling strategy is applied to further reduce the communication overhead.", "title": "" }, { "docid": "fff6c1ca2fde7f50c3654f1953eb97e6", "text": "This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.", "title": "" } ]
scidocsrr
f336ae2a176b853c1175811f8ea32d62
A Data Mining-Based Solution for Detecting Suspicious Money Laundering Cases in an Investment Bank
[ { "docid": "e67dc912381ebbae34d16aad0d3e7d92", "text": "In this paper, we study the problem of applying data mining to facilitate the investigation of money laundering crimes (MLCs). We have identified a new paradigm of problems --- that of automatic community generation based on uni-party data, the data in which there is no direct or explicit link information available. Consequently, we have proposed a new methodology for Link Discovery based on Correlation Analysis (LDCA). We have used MLC group model generation as an exemplary application of this problem paradigm, and have focused on this application to develop a specific method of automatic MLC group model generation based on timeline analysis using the LDCA methodology, called CORAL. A prototype of CORAL method has been implemented, and preliminary testing and evaluations based on a real MLC case data are reported. The contributions of this work are: (1) identification of the uni-party data community generation problem paradigm, (2) proposal of a new methodology LDCA to solve for problems in this paradigm, (3) formulation of the MLC group model generation problem as an example of this paradigm, (4) application of the LDCA methodology in developing a specific solution (CORAL) to the MLC group model generation problem, and (5) development, evaluation, and testing of the CORAL prototype in a real MLC case data.", "title": "" }, { "docid": "a704582d5a3019a2c714e349347a402e", "text": "Today, money laundering (ML) poses a serious threat not only to financial institutions but also to the nation. This criminal activity is becoming more and more sophisticated and seems to have moved from the cliché of drug trafficking to financing terrorism and surely not forgetting personal gain. Most international financial institutions have been implementing anti-money laundering solutions (AML) to fight investment fraud. However, traditional investigative techniques consume numerous man-hours. Recently, data mining approaches have been developed and are considered as well-suited techniques for detecting ML activities. Within the scope of a collaboration project for the purpose of developing a new solution for the AML Units in an international investment bank, we proposed a data mining-based solution for AML. In this paper, we present a heuristics approach to improve the performance for this solution. We also show some preliminary results associated with this method on analysing transaction datasets. Keywords—data mining, anti money laundering, clustering, heuristics.", "title": "" }, { "docid": "0a0f4f5fc904c12cacb95e87f62005d0", "text": "This text is intended to provide a balanced introduction to machine vision. Basic concepts are introduced with only essential mathematical elements. The details to allow implementation and use of vision algorithm in practical application are provided, and engineering aspects of techniques are emphasized. This text intentionally omits theories of machine vision that do not have sufficient practical applications at the time.", "title": "" } ]
[ { "docid": "a96d8a1763da1e806a8044f2b9338507", "text": "Performing cellular long term evolution (LTE) communications in unlicensed spectrum using licensed assisted access LTE (LTE-LAA) is a promising approach to overcome wireless spectrum scarcity. However, to reap the benefits of LTE-LAA, a fair coexistence mechanism with other incumbent WiFi deployments is required. In this paper, a novel deep learning approach is proposed for modeling the resource allocation problem of LTE-LAA small base stations (SBSs). The proposed approach enables multiple SBSs to proactively perform dynamic channel selection, carrier aggregation, and f ractional spectrum access while guaranteeing fairness with existing WiFi networks and other LTE-LAA operators. Adopting a proactive coexistence mechanism enables future delay-tolerant LTE-LAA data demands to be served within a given prediction window ahead of their actual arrival time thus avoiding the underutilization of the unlicensed spectrum during off-peak hours while maximizing the total served LTE-LAA traffic load. To this end, a noncooperative game model is formulated in which SBSs are modeled as homo egualis agents that aim at predicting a sequence of future actions and thus achieving long-term equal weighted fairness with wireless local area network and other LTE-LAA operators over a given time horizon. The proposed deep learning algorithm is then shown to reach a mixed-strategy Nash equilibrium, when it converges. Simulation results using real data traces show that the proposed scheme can yield up to 28% and 11% gains over a conventional reactive approach and a proportional fair coexistence mechanism, respectively. The results also show that the proposed framework prevents WiFi performance degradation for a densely deployed LTE-LAA network.", "title": "" }, { "docid": "2171597ce533ccaae30b870217b84813", "text": "Task performance data and subjective assessment data are widely used as usability measures in the human-computer interaction (HCI) field. Recently, physiology has also been explored as a metric for evaluating usability. However, it is not clear how physiological measures relate to traditional usability evaluation measures. In this paper, we investigated the relationships among three kinds of data: task performance, subjective assessment and physiological measures. We found evidence that physiological data correlate with task performance data in a video game: with a decrease of the task performance level, the normalized galvanic skin response (GSR) increases. In addition, physiological data are mirrored in subjective reports assessing stress level. The research provides an initial step toward establishing a new usability method using physiology as a complementary measure for traditional HCI evaluation.", "title": "" }, { "docid": "d4d46f30a1e918f89948110dc9c36464", "text": "Many real-world problems involve the optimization of multiple, possibly conflicting objectives. Multi-objective reinforcement learning (MORL) is a generalization of standard reinforcement learning where the scalar reward signal is extended to multiple feedback signals, in essence, one for each objective. MORL is the process of learning policies that optimize multiple criteria simultaneously. In this paper, we present a novel temporal difference learning algorithm that integrates the Pareto dominance relation into a reinforcement learning approach. This algorithm is a multi-policy algorithm that learns a set of Pareto dominating policies in a single run. 
We name this algorithm Pareto Q-learning and it is applicable in episodic environments with deterministic as well as stochastic transition functions. A crucial aspect of Pareto Q-learning is the updating mechanism that bootstraps sets of Q-vectors. One of our main contributions in this paper is a mechanism that separates the expected immediate reward vector from the set of expected future discounted reward vectors. This decomposition allows us to update the sets and to exploit the learned policies consistently throughout the state space. To balance exploration and exploitation during learning, we also propose three set evaluation mechanisms. These three mechanisms evaluate the sets of vectors to accommodate for standard action selection strategies, such as -greedy. More precisely, these mechanisms use multi-objective evaluation principles such as the hypervolume measure, the cardinality indicator and the Pareto dominance relation to select the most promising actions. We experimentally validate the algorithm on multiple environments with two and three objectives and we demonstrate that Pareto Q-learning outperforms current state-of-the-art MORL algorithms with respect to the hypervolume of the obtained policies. We note that (1) Pareto Q-learning is able to learn the entire Pareto front under the usual assumption that each state-action pair is sufficiently sampled, while (2) not being biased by the shape of the Pareto front. Furthermore, (3) the set evaluation mechanisms provide indicative measures for local action selection and (4) the learned policies can be retrieved throughout the state and action space.", "title": "" }, { "docid": "831e768b1e4eede4189bba2c116d8074", "text": "The Web of Things (WoT) plays an important role in the representation of the objects connected to the Internet of Things in a more transparent and effective way. Thus, it enables seamless and ubiquitous web communication between users and the smart things. Considering the importance of WoT, we propose a WoT-based emerging sensor network (WoT-ESN), which collects data from sensors, routes sensor data to the web, and integrate smart things into the web employing a representational state transfer (REST) architecture. A smart home scenario is introduced to evaluate the proposed WoT-ESN architecture. The smart home scenario is tested through computer simulation of the energy consumption of various household appliances, device discovery, and response time performance. The simulation results show that the proposed scheme significantly optimizes the energy consumption of the household appliances and the response time of the appliances.", "title": "" }, { "docid": "65be3c4cf41f035e79fe0e968b8b5158", "text": "An efficient and analytical continuous-curvature path-smoothing algorithm, which fits an ordered sequence of waypoints generated by an obstacle-avoidance path planner, is proposed. The algorithm is based upon parametric cubic Bézier curves; thus, it is inherently closed-form in its expression, and the algorithm only requires the maximum curvature to be defined. The algorithm is, thus, computational efficient and easy to implement. 
Results show the effectiveness of the analytical algorithm in generating a continuous-curvature path, which satisfies an upper bound-curvature constraint, and that the path generated requires less control effort to track and minimizes control-input variability.", "title": "" }, { "docid": "41fccda132fae841c48c589f5c1cf69b", "text": "We present a conceptual framework for transfer in reinforcement learning based on the idea that related tasks share a common space. The framework attempts to capture the notion of tasks that are related (so that transfer is possible) but distinct (so that transfer is non-trivial). We define three types of transfer (knowledge, skill and model transfer) in terms of the framework, and illustrate them with an example scenario.", "title": "" }, { "docid": "37063598a4902435c1cb2142879b4094", "text": "Thermal/residual deformations and stresses in plastic integrated circuit (IC) packages caused by epoxy molding compound (EMC) during the manufacturing process are investigated experimentally (only for deformations), theoretically, and numerically. A real-time Twyman-Green interferometry is used for measuring the out-of-plane thermal and residual deformations of die/EMC bi-material specimens. Dynamic mechanical analysis (DMA) and thermomechanical analysis (TMA) are for characterizing thermomechanical properties of the EMC materials. A finite element model (FEM) and theory associated with experimental observations are employed for understanding the thermal/residual deformations and stresses of IC packages due to EMC encapsulation. It is shown that EMC materials must be fully cured so that the material properties are stable enough for applications. Experimental results show that the EMC material experiences stress relaxation due to its viscoelastic behavior during the post mold curing (PMC) process. As a result, the strains (stresses) resulted from the chemical shrinkage of the EMC curing could be relaxed during the PMC process, so that the chemical shrinkage has no effect on the residual strains (stresses) for the plastic packages being post cured. Compared with numerical and theoretical analyses, the experimental results have demonstrated that die/EMC bi-material structure at high temperature (above Tg) warps less than expected, as a result of viscoelastic stress relaxation of EMC at high temperature (during solder reflow process). Meanwhile, this stress relaxation can also cause shifting this zero-stress temperature to the higher one, so that the residual deformations (stresses) of die/EMC bi-material specimens were found to increase by about 40% after the solder reflow process. The residual and thermal stresses have been resolved by FEM and theoretical analyses. The results suggest that the pure bending stresses (without shear and peel stresses) of the bi-material specimens are only limited in the region from x= 0 (the center) to x= 0.75 L due to the free edge effects, but this region is shrunk down to x= 0.4L at 200degC. And the maximum warpage and bending stress per unit temperature change is occurred around 165degC (Tg of the EMC). 
This study has demonstrated that the Twyman-Green experiment with associated bi-material plate theory and FEM can provide a useful tool for studying the EMC-induce residual/thermal deformations and stresses during the IC packaging fabrication", "title": "" }, { "docid": "259972cd20a1f763b07bef4619dc7f70", "text": "This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based learning of language. The advantage of the language origination itself is taken as a learning platform due to the complexity in Chinese language as compared to other types of languages. Users especially children enjoy more by utilize this learning system because they are able to memories the Chinese Character easily and understand more of the origin of the Chinese character under pleasurable learning environment, compares to traditional approach which children need to rote learning Chinese Character under un-pleasurable environment. Skeletonization is used as the representation of Chinese character and object with an animated pictograph evolution to facilitate the learning of the language. Shortest skeleton path matching technique is employed for fast and accurate matching in our implementation. User is required to either write a word or draw a simple 2D object in the input panel and the matched word and object will be displayed as well as the pictograph evolution to instill learning. The target of computer-based learning system is for pre-school children between 4 to 6 years old to learn Chinese characters in a flexible and entertaining manner besides utilizing visual and mind mapping strategy as learning methodology.", "title": "" }, { "docid": "f6f1462e8edd8200948168423c87c1bf", "text": "Users' behaviors are driven by their preferences across various aspects of items they are potentially interested in purchasing, viewing, etc. Latent space approaches model these aspects in the form of latent factors. Although such approaches have been shown to lead to good results, the aspects that are important to different users can vary. In many domains, there may be a set of aspects for which all users care about and a set of aspects that are specific to different subsets of users. To explicitly capture this, we consider models in which there are some latent factors that capture the shared aspects and some user subset specific latent factors that capture the set of aspects that the different subsets of users care about.\n In particular, we propose two latent space models: rGLSVD and sGLSVD, that combine such a global and user subset specific sets of latent factors. The rGLSVD model assigns the users into different subsets based on their rating patterns and then estimates a global and a set of user subset specific local models whose number of latent dimensions can vary.\n The sGLSVD model estimates both global and user subset specific local models by keeping the number of latent dimensions the same among these models but optimizes the grouping of the users in order to achieve the best approximation. Our experiments on various real-world datasets show that the proposed approaches significantly outperform state-of-the-art latent space top-N recommendation approaches.", "title": "" }, { "docid": "68f422172815df9fff6bf515bf7ea803", "text": "Active learning (AL) promises to reduce the cost of annotating labeled datasets for trainable human language technologies. 
Contrary to expectations, when creating labeled training material for HPSG parse selection and latereusing it with other models, gains from AL may be negligible or even negative. This has serious implications for using AL, showing that additional cost-saving strategies may need to be adopted. We explore one such strategy: using a model during annotation to automate some of the decisions. Our best results show an 80% reduction in annotation cost compared with labeling randomly selected data with a single model.", "title": "" }, { "docid": "ae4ec9d443bb5db16934e7d90b9dd739", "text": "Quality of Service (QoS) -- based bandwidth allocation plays a key role in real-time computing systems and applications such as voice IP, teleconferencing, and gaming. Likewise, customer services often need to be distinguished according to their service priorities and requirements. In this paper, we consider bandwidth allocation in the networks of a cloud carrier in which cloud users' requests are processed and transferred by a cloud provider subject to QoS requirements. We present a QoS-guaranteed approach for bandwidth allocation that satisfies QoS requirements for all priority cloud users by using Open vSwitch, based on software defined networking (SDN). We implement and test the proposed approach on the Global Environment for Networking Innovations (GENI). Experimental results show the effectiveness of the proposed approach.", "title": "" }, { "docid": "e58d7f537b0d703fa1381eee2d721a34", "text": "BACKGROUND\nProvision of high quality transitional care is a challenge for health care providers in many western countries. This systematic review was conducted to (1) identify and synthesise research, using randomised control trial designs, on the quality of transitional care interventions compared with standard hospital discharge for older people with chronic illnesses, and (2) make recommendations for research and practice.\n\n\nMETHODS\nEight databases were searched; CINAHL, Psychinfo, Medline, Proquest, Academic Search Complete, Masterfile Premier, SocIndex, Humanities and Social Sciences Collection, in addition to the Cochrane Collaboration, Joanna Briggs Institute and Google Scholar. Results were screened to identify peer reviewed journal articles reporting analysis of quality indicator outcomes in relation to a transitional care intervention involving discharge care in hospital and follow-up support in the home. Studies were limited to those published between January 1990 and May 2013. Study participants included people 60 years of age or older living in their own homes who were undergoing care transitions from hospital to home. Data relating to study characteristics and research findings were extracted from the included articles. Two reviewers independently assessed studies for risk of bias.\n\n\nRESULTS\nTwelve articles met the inclusion criteria. Transitional care interventions reported in most studies reduced re-hospitalizations, with the exception of general practitioner and primary care nurse models. All 12 studies included outcome measures of re-hospitalization and length of stay indicating a quality focus on effectiveness, efficiency, and safety/risk. Patient satisfaction was assessed in six of the 12 studies and was mostly found to be high. Other outcomes reflecting person and family centred care were limited including those pertaining to the patient and carer experience, carer burden and support, and emotional support for older people and their carers. 
Limited outcome measures were reported reflecting timeliness, equity, efficiencies for community providers, and symptom management.\n\n\nCONCLUSIONS\nGaps in the evidence base were apparent in the quality domains of timeliness, equity, efficiencies for community providers, effectiveness/symptom management, and domains of person and family centred care. Further research that involves the person and their family/caregiver in transitional care interventions is needed.", "title": "" }, { "docid": "ebf5efac65fe9912b573843941ffa8cd", "text": "Objectives Despite the popularity of closed circuit television (CCTV), evidence of its crime prevention capabilities is inconclusive. Research has largely reported CCTV effect as ‘‘mixed’’ without explaining this variance. The current study contributes to the literature by testing the influence of several micro-level factors on changes in crime levels within CCTV areas of Newark, NJ. Methods Viewsheds, denoting the line-of-sight of CCTV cameras, were units of analysis (N = 117). Location quotients, controlling for viewshed size and control-area crime incidence, measured changes in the levels of six crime categories, from the pre-installation period to the post-installation period. Ordinary least squares regression models tested the influence of specific micro-level factors—environmental features, camera line-of-sight, enforcement activity, and camera design—on each crime category. Results First, the influence of environmental features differed across crime categories, with specific environs being related to the reduction of certain crimes and the increase of others. Second, CCTV-generated enforcement was related to the reduction of overall crime, violent crime and theft-from-auto. Third, obstructions to CCTV line-of-sight caused by immovable objects were related to increased levels of auto theft and decreased levels of violent crime, theft from auto and robbery. Conclusions The findings suggest that CCTV operations should be designed in a manner that heightens their deterrent effect. Specifically, police should account for the presence of crime generators/attractors and ground-level obstructions when selecting camera sites, and design the operational strategy in a manner that generates maximum levels of enforcement.", "title": "" }, { "docid": "dbff2130c480634608cddd8a9fea59cb", "text": "The presence of a physician seems to be beneficial for pre-hospital cardiopulmonary resuscitation (CPR) of patients with out-of-hospital cardiac arrest. However, the effectiveness of a physician's presence during CPR before hospital arrival has not been established. We conducted a prospective, non-randomized, observational study using national data from out-of-hospital cardiac arrests between 2005 and 2010 in Japan. We performed a propensity analysis and examined the association between a physician's presence during an ambulance car ride and short- and long-term survival from out-of-hospital cardiac arrest. Specifically, a full non-parsimonious logistic regression model was fitted with the physician presence in the ambulance as the dependent variable; the independent variables included all study variables except for endpoint variables plus dummy variables for the 47 prefectures in Japan (i.e., 46 variables). In total, 619,928 out-of-hospital cardiac arrest cases that met the inclusion criteria were analyzed. 
Among propensity-matched patients, a positive association was observed between a physician's presence during an ambulance car ride and return of spontaneous circulation (ROSC) before hospital arrival, 1-month survival, and 1-month survival with minimal neurological or physical impairment (ROSC: OR = 1.84, 95% CI 1.63-2.07, p = 0.00 in adjusted for propensity and all covariates); 1-month survival: OR = 1.29, 95% CI 1.04-1.61, p = 0.02 in adjusted for propensity and all covariates); cerebral performance category (1 or 2): OR = 1.54, 95% CI 1.03-2.29, p = 0.04 in adjusted for propensity and all covariates); and overall performance category (1 or 2): OR = 1.50, 95% CI 1.01-2.24, p = 0.05 in adjusted for propensity and all covariates). A prospective observational study using national data from out-of-hospital cardiac arrests shows that a physician's presence during an ambulance car ride was independently associated with increased short- and long-term survival.", "title": "" }, { "docid": "cf4a03657506f46baa934fa35cd84589", "text": "In this work we present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives state-of-the-art results for both.", "title": "" }, { "docid": "ae97effd4e999ccf580d32c8522b6f59", "text": "Eight isolates of cellulose-degrading bacteria (CDB) were isolated from four different invertebrates (termite, snail, caterpillar, and bookworm) by enriching the basal culture medium with filter paper as substrate for cellulose degradation. To indicate the cellulase activity of the organisms, diameter of clear zone around the colony and hydrolytic value on cellulose Congo Red agar media were measured. CDB 8 and CDB 10 exhibited the maximum zone of clearance around the colony with diameter of 45 and 50 mm and with the hydrolytic value of 9 and 9.8, respectively. The enzyme assays for two enzymes, filter paper cellulase (FPC), and cellulase (endoglucanase), were examined by methods recommended by the International Union of Pure and Applied Chemistry (IUPAC). The extracellular cellulase activities ranged from 0.012 to 0.196 IU/mL for FPC and 0.162 to 0.400 IU/mL for endoglucanase assay. All the cultures were also further tested for their capacity to degrade filter paper by gravimetric method. The maximum filter paper degradation percentage was estimated to be 65.7 for CDB 8. Selected bacterial isolates CDB 2, 7, 8, and 10 were co-cultured with Saccharomyces cerevisiae for simultaneous saccharification and fermentation. Ethanol production was positively tested after five days of incubation with acidified potassium dichromate.", "title": "" }, { "docid": "6a470404c36867a18a98fafa9df6848f", "text": "Memory links use variable-impedance drivers, feed-forward equalization (FFE) [1], on-die termination (ODT) and slew-rate control to optimize the signal integrity (SI). An asymmetric DRAM link configuration exploits the availability of a fast CMOS technology on the memory controller side to implement powerful equalization, while keeping the circuit complexity on the DRAM side relatively simple. 
This paper proposes the use of Tomlinson Harashima precoding (THP) [2-4] in a memory controller as replacement of the afore-mentioned SI optimization techniques. THP is a transmitter equalization technique in which post-cursor inter-symbol interference (ISI) is cancelled by means of an infinite impulse response (IIR) filter with modulo-based amplitude limitation; similar to a decision feedback equalizer (DFE) on the receive side. However, in contrast to a DFE, THP does not suffer from error propagation.", "title": "" }, { "docid": "6ae289d7da3e923c1288f39fd7a162f6", "text": "The usage of digital evidence from electronic devices has been rapidly expanding within litigation, and along with this increased usage, the reliance upon forensic computer examiners to acquire, analyze, and report upon this evidence is also rapidly growing. This growing demand for forensic computer examiners raises questions concerning the selection of individuals qualified to perform this work. While courts have mechanisms for qualifying witnesses that provide testimony based on scientific data, such as digital data, the qualifying criteria covers a wide variety of characteristics including, education, experience, training, professional certifications, or other special skills. In this study, we compare task performance responses from forensic computer examiners with an expert review panel and measure the relationship with the characteristics of the examiners to their quality responses. The results of this analysis provide insight into identifying forensic computer examiners that provide high-quality responses.", "title": "" }, { "docid": "10584d580f626fe5937dd3855a7be987", "text": "This paper presents virtual asymmetric multiprocessor, a new scheme of virtual desktop scheduling on multi-core processors for user-interactive performance. The proposed scheme enables virtual CPUs to be dynamically performance-asymmetric based on their hosted workloads. To enhance user experience on consolidated desktops, our scheme provides interactive workloads with fast virtual CPUs, which have more computing power than those hosting background workloads in the same virtual machine. To this end, we devise a hypervisor extension that transparently classifies background tasks from potentially interactive workloads. In addition, we introduce a guest extension that manipulates the scheduling policy of an operating system in favor of our hypervisor-level scheme so that interactive performance can be further improved. Our evaluation shows that the proposed scheme significantly improves interactive performance of application launch, Web browsing, and video playback applications when CPU-intensive workloads highly disturb the interactive workloads.", "title": "" } ]
scidocsrr
b79de242430004bb86f43fdb4f2c74b2
ExprGAN: Facial Expression Editing With Controllable Expression Intensity
[ { "docid": "4424a73177671ce5f1abcd304e546434", "text": "Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoderdecoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-theart results on large pose face recognition.", "title": "" } ]
[ { "docid": "011ff2d5995a46a686d9edb80f33b8ca", "text": "In the era of Social Computing, the role of customer reviews and ratings can be instrumental in predicting the success and sustainability of businesses as customers and even competitors use them to judge the quality of a business. Yelp is one of the most popular websites for users to write such reviews. This rating can be subjective and biased toward user's personality. Business preferences of a user can be decrypted based on his/ her past reviews. In this paper, we deal with (i) uncovering latent topics in Yelp data based on positive and negative reviews using topic modeling to learn which topics are the most frequent among customer reviews, (ii) sentiment analysis of users' reviews to learn how these topics associate to a positive or negative rating which will help businesses improve their offers and services, and (iii) predicting unbiased ratings from user-generated review text alone, using Linear Regression model. We also perform data analysis to get some deeper insights into customer reviews.", "title": "" }, { "docid": "1572891f4c2ab064c6d6a164f546e7c1", "text": "BACKGROUND Unexplained gastrointestinal (GI) symptoms and joint hypermobility (JHM) are common in the general population, the latter described as benign joint hypermobility syndrome (BJHS) when associated with musculo-skeletal symptoms. Despite overlapping clinical features, the prevalence of JHM or BJHS in patients with functional gastrointestinal disorders has not been examined. METHODS The incidence of JHM was evaluated in 129 new unselected tertiary referrals (97 female, age range 16-78 years) to a neurogastroenterology clinic using a validated 5-point questionnaire. A rheumatologist further evaluated 25 patients with JHM to determine the presence of BJHS. Groups with or without JHM were compared for presentation, symptoms and outcomes of relevant functional GI tests. KEY RESULTS Sixty-three (49%) patients had evidence of generalized JHM. An unknown aetiology for GI symptoms was significantly more frequent in patients with JHM than in those without (P < 0.0001). The rheumatologist confirmed the clinical impression of JHM in 23 of 25 patients, 17 (68%) of whom were diagnosed with BJHS. Patients with co-existent BJHS and GI symptoms experienced abdominal pain (81%), bloating (57%), nausea (57%), reflux symptoms (48%), vomiting (43%), constipation (38%) and diarrhoea (14%). Twelve of 17 patients presenting with upper GI symptoms had delayed gastric emptying. One case is described in detail. CONCLUSIONS & INFERENCES In a preliminary retrospective study, we have found a high incidence of JHM in patients referred to tertiary neurogastroenterology care with unexplained GI symptoms and in a proportion of these a diagnosis of BJHS is made. Symptoms and functional tests suggest GI dysmotility in a number of these patients. The possibility that a proportion of patients with unexplained GI symptoms and JHM may share a common pathophysiological disorder of connective tissue warrants further investigation.", "title": "" }, { "docid": "15cb7023c175e2c92cd7b392205fb87f", "text": "Feedback has a strong influence on effective learning from computer-based instruction. Prior research on feedback in computer-based instruction has mainly focused on static feedback schedules that employ the same feedback schedule throughout an instructional session. 
This study examined transitional feedback schedules in computer-based multimedia instruction on procedural problem-solving in electrical circuit analysis. Specifically, we compared two transitional feedback schedules: the TFS-P schedule switched from initial feedback after each problem step to feedback after a complete problem at later learning states; the TFP-S schedule transitioned from feedback after a complete problem to feedback after each problem step. As control conditions, we also considered two static feedback schedules, namely providing feedback after each practice problem-solving step (SFS) or providing feedback after attempting a complete multi-step practice problem (SFP). Results indicate that the static stepwise (SFS) and transitional stepwise to problem (TFS-P) feedback produce higher problem solving near-transfer post-test performance than static problem (SFP) and transitional problem to step (TFP-S) feedback. Also, TFS-P resulted in higher ratings of program liking and feedback helpfulness than TFP-S. Overall, the study results indicate benefits of maintaining high feedback frequency (SFS) and reducing feedback frequency (TFS-P) compared to low feedback frequency (SFP) or increasing feedback frequency (TFP-S) as novice learners acquire engineering problem solving skills. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "323d633995296611c903874aefa5cdb7", "text": "This paper investigates the possibility of communicating through vibrations. By modulating the vibration motors available in all mobile phones, and decoding them through accelerometers, we aim to communicate small packets of information. Of course, this will not match the bit rates available through RF modalities, such as NFC or Bluetooth, which utilize a much larger bandwidth. However, where security is vital, vibratory communication may offer advantages. We develop Ripple, a system that achieves up to 200 bits/s of secure transmission using off-the-shelf vibration motor chips, and 80 bits/s on Android smartphones. This is an outcome of designing and integrating a range of techniques, including multicarrier modulation, orthogonal vibration division, vibration braking, side-channel jamming, etc. Not all these techniques are novel; some are borrowed and suitably modified for our purposes, while others are unique to this relatively new platform of vibratory communication.", "title": "" }, { "docid": "2fc2234e6f8f70e0b12f1f72b1d21175", "text": "Servers and HPC systems often use a strong memory error correction code, or ECC, to meet their reliability and availability requirements. However, these ECCs often require significant capacity and/or power overheads. We observe that since memory channels are independent from one another, error correction typically needs to be performed for one channel at a time. Based on this observation, we show that instead of always storing in memory the actual ECC correction bits as do existing systems, it is sufficient to store the bitwise parity of the ECC correction bits of different channels for fault-free memory regions, and store the actual ECC correction bits only for faulty memory regions. 
By trading off the resultant ECC capacity overhead reduction for improved memory energy efficiency, the proposed technique reduces memory energy per instruction by 54.4% and 20.6%, respectively, compared to a commercial chipkill correct ECC and a DIMM-kill correct ECC, while incurring similar or lower capacity overheads.", "title": "" }, { "docid": "7ee31d080b3cd7632c25c22b378e6d91", "text": "Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks, but the precise manner in which this occurs has thus far been elusive. We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential is however not the original loss function in general. So SGD does perform variational inference, but for a different loss than the one used to compute the gradients. Even more surprisingly, SGD does not even converge in the classical sense: we show that the most likely trajectories of SGD for deep networks do not behave like Brownian motion around critical points. Instead, they resemble closed loops with deterministic components. We prove that such “out-of-equilibrium” behavior is a consequence of highly nonisotropic gradient noise in SGD; the covariance matrix of mini-batch gradients for deep networks has a rank as small as 1% of its dimension. We provide extensive empirical validation of these claims. This article summarizes the findings in [1]. See the longer version for background, detailed results and proofs.", "title": "" }, { "docid": "7056b8e792a2bd1535cf020b2aeab2c7", "text": "The authors propose a theoretical model linking achievement goals and achievement emotions to academic performance. This model was tested in a prospective study with undergraduates (N 213), using exam-specific assessments of both goals and emotions as predictors of exam performance in an introductory-level psychology course. The findings were consistent with the authors’ hypotheses and supported all aspects of the proposed model. In multiple regression analysis, achievement goals (mastery, performance approach, and performance avoidance) were shown to predict discrete achievement emotions (enjoyment, boredom, anger, hope, pride, anxiety, hopelessness, and shame), achievement emotions were shown to predict performance attainment, and 7 of the 8 focal emotions were documented as mediators of the relations between achievement goals and performance attainment. All of these findings were shown to be robust when controlling for gender, social desirability, positive and negative trait affectivity, and scholastic ability. The results are discussed with regard to the underdeveloped literature on discrete achievement emotions and the need to integrate conceptual and applied work on achievement goals and achievement emotions.", "title": "" }, { "docid": "88e72e039de541b00722901a8eff7d19", "text": "When building agents and synthetic characters, and in order to achieve believability, we must consider the emotional relations established between users and characters, that is, we must consider the issue of \"empathy\". Defined in broad terms as \"An observer reacting emotionally because he perceives that another is experiencing or about to experience an emotion\", empathy is an important element to consider in the creation of relations between humans and agents. 
In this paper we will focus on the role of empathy in the construction of synthetic characters, providing some requirements for such construction and illustrating the presented concepts with a specific system called FearNot!. FearNot! was developed to address the difficult and often devastating problem of bullying in schools. By using role playing and empathic synthetic characters in a 3D environment, FearNot! allows children from 8 to 12 to experience a virtual scenario where they can witness (in a third-person perspective) bullying situations. To build empathy into FearNot! we have considered the following components: agent's architecture; the characters' embodiment and emotional expression; proximity with the user and emotionally charged situations. We will describe how these were implemented in FearNot! and report on the preliminary results we have with it.", "title": "" }, { "docid": "63cedd9ee8958ad27668b606921ac100", "text": "Stein kernel (SK) has recently shown promising performance on classifying images represented by symmetric positive definite (SPD) matrices. It evaluates the similarity between two SPD matrices through their eigenvalues. In this paper, we argue that directly using the original eigenvalues may be problematic because: 1) eigenvalue estimation becomes biased when the number of samples is inadequate, which may lead to unreliable kernel evaluation, and 2) more importantly, eigenvalues reflect only the property of an individual SPD matrix. They are not necessarily optimal for computing SK when the goal is to discriminate different classes of SPD matrices. To address the two issues, we propose a discriminative SK (DSK), in which an extra parameter vector is defined to adjust the eigenvalues of input SPD matrices. The optimal parameter values are sought by optimizing a proxy of classification performance. To show the generality of the proposed method, three kernel learning criteria that are commonly used in the literature are employed as a proxy. A comprehensive experimental study is conducted on a variety of image classification tasks to compare the proposed DSK with the original SK and other methods for evaluating the similarity between SPD matrices. The results demonstrate that the DSK can attain greater discrimination and better align with classification tasks by altering the eigenvalues. This makes it produce higher classification performance than the original SK and other commonly used methods.", "title": "" }, { "docid": "36190ca28bff2390c9037404bda2cd5f", "text": "In this paper we propose an approach to modeling syntactically-motivated skeletal structure of source sentence for machine translation. This model allows for application of high-level syntactic transfer rules and low-level non-syntactic rules. It thus involves fully syntactic, non-syntactic, and partially syntactic derivations via a single grammar and decoding paradigm. On large-scale Chinese-English and English-Chinese translation tasks, we obtain an average improvement of +0.9 BLEU across the newswire and web genres.", "title": "" }, { "docid": "496ba5ee48281afe48b5afce02cc4dbf", "text": "OBJECTIVE\nThis study examined the relationship between reported exposure to child abuse and a history of parental substance abuse (alcohol and drugs) in a community sample in Ontario, Canada.\n\n\nMETHOD\nThe sample consisted of 8472 respondents to the Ontario Mental Health Supplement (OHSUP), a comprehensive population survey of mental health. 
The association of self-reported retrospective childhood physical and sexual abuse and parental histories of drug or alcohol abuse was examined.\n\n\nRESULTS\nRates of physical and sexual abuse were significantly higher, with a more than twofold increased risk among those reporting parental substance abuse histories. The rates were not significantly different between type or severity of abuse. Successively increasing rates of abuse were found for those respondents who reported that their fathers, mothers or both parents had substance abuse problems; this risk was significantly elevated for both parents compared to father only with substance abuse problem.\n\n\nCONCLUSIONS\nParental substance abuse is associated with a more than twofold increase in the risk of exposure to both childhood physical and sexual abuse. While the mechanism for this association remains unclear, agencies involved in child protection or in treatment of parents with substance abuse problems must be cognizant of this relationship and focus on the development of interventions to serve these families.", "title": "" }, { "docid": "1ade3a53c754ec35758282c9c51ced3d", "text": "Radical hysterectomy represents the treatment of choice for FIGO stage IA2–IIA cervical cancer. It is associated with several serious complications such as urinary and anorectal dysfunction due to surgical trauma to the autonomous nervous system. In order to determine those surgical steps involving the risk of nerve injury during both classical and nerve-sparing radical hysterectomy, we investigated the relationships between pelvic fascial, vascular and nervous structures in a large series of embalmed and fresh female cadavers. We showed that the extent of potential denervation after classical radical hysterectomy is directly correlated with the radicality of the operation. The surgical steps that carry a high risk of nerve injury are the resection of the uterosacral and vesicouterine ligaments and of the paracervix. A nerve-sparing approach to radical hysterectomy for cervical cancer is feasible if specific resection limits, such as the deep uterine vein, are carefully identified and respected. However, a nerve-sparing surgical effort should be balanced with the oncological priorities of removal of disease and all its potential routes of local spread. L'hystérectomie radicale est le traitement de choix pour les cancers du col utérin de stade IA2–IIA de la Fédération Internationale de Gynécologie Obstétrique (FIGO). Cette intervention comporte plusieurs séquelles graves, telles que les dysfonctions urinaires ou ano-rectales, par traumatisme chirurgical des nerfs végétatifs pelviens. Pour mettre en évidence les temps chirurgicaux impliquant un risque de lésion nerveuse lors d'une hystérectomie radicale classique et avec préservation nerveuse, nous avons recherché les rapports entre le fascia pelvien, les structures vasculaires et nerveuses sur une large série de sujets anatomiques féminins embaumés et non embaumés. Nous avons montré que l'étendue de la dénervation potentielle après hystérectomie radicale classique était directement en rapport avec le caractère radical de l'intervention. Les temps chirurgicaux à haut risque pour des lésions nerveuses sont la résection des ligaments utéro-sacraux, des ligaments vésico-utérins et du paracervix. L'hystérectomie radicale avec préservation nerveuse est possible si des limites de résection spécifiques telle que la veine utérine profonde sont soigneusement identifiées et respectées. 
Cependant une chirurgie de préservation nerveuse doit être mise en balance avec les priorités carcinologiques d'exérèse du cancer et de toutes ses voies potentielles de dissémination locale.", "title": "" }, { "docid": "98b4703412d1c8ccce22ea6fb05d73bf", "text": "Clinical evaluation of scapular dyskinesis (SD) aims to identify abnormal scapulothoracic movement, underlying causal factors, and the potential relationship with shoulder symptoms. The literature proposes different methods of dynamic clinical evaluation of SD, but improved reliability and agreement values are needed. The present study aimed to evaluate the intrarater and interrater agreement and reliability of three SD classifications: 1) 4-type classification, 2) Yes/No classification, and 3) scapular dyskinesis test (SDT). Seventy-five young athletes, including 45 men and 30 women, were evaluated. Raters evaluated the SD based on the three methods during one series of 8-10 cycles (at least eight and maximum of ten) of forward flexion and abduction with an external load under the observation of two raters trained to diagnose SD. The evaluation protocol was repeated after 3 h for intrarater analysis. The agreement percentage was calculated by dividing the observed agreement by the total number of observations. Reliability was calculated using Cohen Kappa coefficient, with a 95% confidence interval (CI), defined by Kappa coefficient ±1.96 multiplied by the measurement standard error. The interrater analyses showed an agreement percentage between 80% and 95.9% and an almost perfect reliability (k>0.81) for the three classification methods in all the test conditions, except the 4-type and SDT classification methods, which had substantial reliability (k<0.80) in shoulder abduction. Intrarater analyses showed agreement percentages between 80.7% and 89.3% and substantial reliability (0.67 to 0.81) for both raters in the three classifications. CIs ranged from moderate to almost perfect categories. This indicates that the three SD classification methods investigated in this study showed high reliability values for both intrarater and interrater evaluation throughout a protocol that provided SD evaluation training of raters and included several repetitions of arm movements with external load during a live assessment.", "title": "" }, { "docid": "1cecb4765c865c0f44c76f5ed2332c13", "text": "Speaker indexing or diarization is an important task in audio processing and retrieval. Speaker diarization is the process of labeling a speech signal with labels corresponding to the identity of speakers. This paper includes a comprehensive review on the evolution of the technology and different approaches in speaker indexing and tries to offer a fully detailed discussion on these approaches and their contributions. This paper reviews the most common features for speaker diarization in addition to the most important approaches for speech activity detection (SAD) in diarization frameworks. Two main tasks of speaker indexing are speaker segmentation and speaker clustering. This paper includes a separate review on the approaches proposed for these subtasks. However, speaker diarization systems which combine the two tasks in a unified framework are also introduced in this paper. Another discussion concerns the approaches for online speaker indexing which has fundamental differences with traditional offline approaches. Other parts of this paper include an introduction on the most common performance measures and evaluation datasets. 
To conclude this paper, a complete framework for speaker indexing is proposed, which is aimed to be domain independent and parameter free and applicable for both online and offline applications. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "430bfb1ae136a7d886b4c96c455ddc59", "text": "We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially with depth but not width. We prove this generic class of deep random functions cannot be efficiently computed by any shallow network, going beyond prior work restricted to the analysis of single functions. Moreover, we formalize and quantitatively demonstrate the long conjectured idea that deep networks can disentangle highly curved manifolds in input space into flat manifolds in hidden space. Our theoretical analysis of the expressive power of deep networks broadly applies to arbitrary nonlinearities, and provides a quantitative underpinning for previously abstract notions about the geometry of deep functions.", "title": "" }, { "docid": "5ee593925b819f92f425ccc99c836b8d", "text": "This paper proposes an area-efficient fully resistor-string digital-to-analog-converter (R-DAC)-based thin-film transistor liquid crystal display (TFT-LCD) column driver in which DACs supply only negative-polarity voltages, while polarity inverters generate positive-polarity voltages from negative-polarity voltages. An offset cancellation technique is employed in negative-polarity buffers and polarity inverters. An experimental prototype 8-bit column driver was implemented using 0.35-m CMOS technology to verify the proposed driving scheme. The settling time is within 5 s, and the maximum deviation is 0.68 LSB. The average area per channel is 0.04 mm2. Compared with a conventional fully R-DAC-based TFT-LCD column driver, the proposed driving scheme has a DAC area saving of 41%.", "title": "" }, { "docid": "ed5a17f62e4024727538aba18f39fc78", "text": "The extent to which people can focus attention in the face of irrelevant distractions has been shown to critically depend on the level and type of information load involved in their current task. The ability to focus attention improves under task conditions of high perceptual load but deteriorates under conditions of high load on cognitive control processes such as working memory. I review recent research on the effects of load on visual awareness and brain activity, including changing effects over the life span, and I outline the consequences for distraction and inattention in daily life and in clinical populations.", "title": "" }, { "docid": "4d502d1fbcdc5ea30bf54b43daa33352", "text": "This paper investigates linearity enhancements in GaN based Doherty power amplifiers (DPA) with the implementation of forward gate current blocking. Using a simple p-n diode to limit gate current, both open loop and digitally pre-distorted (DPD) linearity for wideband, high peak to average ratio modulated signals, such as LTE, are improved. Forward gate current blocking (FCB) is compatible with normally-on III-V HEMT technology where positive gate current is observed which results in nonlinear operation of RF transistor. By blocking positive gate current, waveform clipping is mitigated at the device gate node. 
Consequently, through dynamic biasing, the effective gate bias at the transistor input is adjusted limiting the RF input signal peaks entering the non-linear regime of the gate Schottky diode inherent to GaN devices. The proposed technique demonstrates more than a 3 dBc improvement in DPD corrected linearity in adjacent channels when four 20 MHz LTE carriers are applied.", "title": "" }, { "docid": "c3c0de7f448c08ff8316ac2caed78b87", "text": "Wearable robots, i.e. active orthoses, exoskeletons, and mechatronic prostheses, represent a class of biomechatronic systems posing severe constraints in terms of safety and controllability. Additionally, whenever the worn system is required to establish a well-tuned dynamic interaction with the human body, in order to exploit emerging dynamical behaviours, the possibility of having modular joints, able to produce a controllable viscoelastic behaviour, becomes crucial. Controllability is a central issue in wearable robotics applications, because it impacts robot safety and effectiveness. Under this regard, DC motors offer very good performances, provided that a proper mounting scheme is used in order to mimic the typical viscoelastic behaviour exhibited by biological systems, as required by the selected application. In this paper we report on the design of two compact devices for controlling the active and passive torques applied to the joint of a wearable robot for the lower limbs. The first device consists of a rotary Serial Elastic Actuator (SEA), incorporating a custom made torsion spring. The second device is a purely mechanical passive viscoelastic joint, functionally equivalent to a torsion spring mounted in parallel to a rotary viscous damper. The torsion stiffness and the damping coefficient can be easily tuned by acting on specific elements, thanks to the modular design of the device. The working principles and basic design choices regarding the overall architectures and the single components are presented and discussed.", "title": "" }, { "docid": "e8a2a052078633adbb613e7898428c69", "text": "Human iris is considered a reliable and accurate modality for biometric recognition due to its unique texture information. However, similar to other biometric modalities, iris recognition systems are also vulnerable to presentation attacks (commonly called spoofing) that attempt to conceal or impersonate identity. Examples of typical iris spoofing attacks are printed iris images, textured contact lenses, and synthetic creation of iris images. It is critical to note that majority of the algorithms proposed in the literature are trained to handle a specific type of spoofing attack. These algorithms usually perform very well on that particular attack. However, in real-world applications, an attacker may perform different spoofing attacks. In such a case, the problem becomes more challenging due to inherent variations in different attacks. In this paper, we focus on a medley of iris spoofing attacks and present a unified framework for detecting such attacks. We propose a novel structural and textural feature based iris spoofing detection framework (DESIST). Multi-order dense Zernike moments are calculated across the iris image which encode variations in structure of the iris image. Local Binary Pattern with Variance (LBPV) is utilized for representing textural changes in a spoofed iris image. 
The highest classification accuracy of 82.20% is observed by the proposed framework for detecting normal and spoofed iris images on a combined iris spoofing database.", "title": "" } ]
scidocsrr
4c0c6cb550fac4462f26f20bee4b9e0a
Iris localization in frontal eye images for less constrained iris recognition systems
[ { "docid": "e8eab2f5481f10201bc82b7a606c1540", "text": "This survey covers the historical development and current state of the art in image understanding for iris biometrics. Most research publications can be categorized as making their primary contribution to one of the four major modules in iris biometrics: image acquisition, iris segmentation, texture analysis and matching of texture representations. Other important research includes experimental evaluations, image databases, applications and systems, and medical conditions that may affect the iris. We also suggest a short list of recommended readings for someone new to the field to quickly grasp the big picture of iris biometrics.", "title": "" }, { "docid": "e02a7947c8ffb6fc6abeb2854ef2afd7", "text": "This paper examines automated iris recognition as a biometri· ca/ly based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular, the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remate examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.", "title": "" } ]
[ { "docid": "d071c70b85b10a62538d73c7272f5d99", "text": "The Amaryllidaceae alkaloids represent a large (over 300 alkaloids have been isolated) and still expanding group of biogenetically related isoquinoline alkaloids that are found exclusively in plants belonging to this family. In spite of their great variety of pharmacological and/or biological properties, only galanthamine is used therapeutically. First isolated from Galanthus species, this alkaloid is a long-acting, selective, reversible and competitive inhibitor of acetylcholinesterase, and is used for the treatment of Alzheimer’s disease. Other Amaryllidaceae alkaloids of pharmacological interest will also be described in this chapter.", "title": "" }, { "docid": "e6b92ef03e801af68cb2660e6ff74902", "text": "In the past two decades, there has been much interest in applying neural networks to financial time series forecasting. Yet, there has been relatively little attention paid to selecting the input features for training these networks. This paper presents a novel CARTMAP neural network based on Adaptive Resonance Theory that incorporates automatic, intuitive, transparent, and parsimonious feature selection with fast learning. On average, over three separate 4-year simulations spanning 2004–2009 of Dow Jones Industrial Average stocks, CARTMAP outperformed related and classical alternatives. The alternatives were an industry standard random walk, a regression model, a general purpose ARTMAP, and ARTMAP with stepwise feature selection. This paper also discusses why the novel feature selection scheme outperforms the alternatives and how it can represent a step toward more transparency in financial modeling.", "title": "" }, { "docid": "f0285873e91d0470e8fbd8ce4430742f", "text": "Copying an element from a photo and pasting it into a painting is a challenging task. Applying photo compositing techniques in this context yields subpar results that look like a collage — and existing painterly stylization algorithms, which are global, perform poorly when applied locally. We address these issues with a dedicated algorithm that carefully determines the local statistics to be transferred. We ensure both spatial and inter-scale statistical consistency and demonstrate that both aspects are key to generating quality results. To cope with the diversity of abstraction levels and types of paintings, we introduce a technique to adjust the parameters of the transfer depending on the painting. We show that our algorithm produces significantly better results than photo compositing or global stylization techniques and that it enables creative painterly edits that would be otherwise difficult to achieve. CCS Concepts •Computing methodologies → Image processing;", "title": "" }, { "docid": "6b6805fa87d31f374a1db8da8acc2163", "text": "BACKGROUND\nWhile Web-based interventions can be efficacious, engaging a target population's attention remains challenging. We argue that strategies to draw such a population's attention should be tailored to meet its needs. Increasing user engagement in online suicide intervention development requires feedback from this group to prevent people who have suicide ideation from seeking treatment.\n\n\nOBJECTIVE\nThe goal of this study was to solicit feedback on the acceptability of the content of messaging from social media users with suicide ideation. 
To overcome the common concern of lack of engagement in online interventions and to ensure effective learning from the message, this research employs a customized design of both content and length of the message.\n\n\nMETHODS\nIn study 1, 17 participants suffering from suicide ideation were recruited. The first (n=8) group conversed with a professional suicide intervention doctor about its attitudes and suggestions for a direct message intervention. To ensure the reliability and consistency of the result, an identical interview was conducted for the second group (n=9). Based on the collected data, questionnaires about this intervention were formed. Study 2 recruited 4222 microblog users with suicide ideation via the Internet.\n\n\nRESULTS\nThe results of the group interviews in study 1 yielded little difference regarding the interview results; this difference may relate to the 2 groups' varied perceptions of direct message design. However, most participants reported that they would be most drawn to an intervention where they knew that the account was reliable. Out of 4222 microblog users, we received responses from 725 with completed questionnaires; 78.62% (570/725) participants were not opposed to online suicide intervention and they valued the link for extra suicide intervention information as long as the account appeared to be trustworthy. Their attitudes toward the intervention and the account were similar to those from study 1, and 3 important elements were found pertaining to the direct message: reliability of account name, brevity of the message, and details of the phone numbers of psychological intervention centers and psychological assessment.\n\n\nCONCLUSIONS\nThis paper proposed strategies for engaging target populations in online suicide interventions.", "title": "" }, { "docid": "2753e0a54d1a58993fcdd79ee40f0aac", "text": "This study investigated the effectiveness of the WAIS-R Block Design subtest to predict everyday spatial ability for 65 university undergraduates (15 men, 50 women) who were administered Block Design, the Standardized Road Map Test of Direction Sense, and the Everyday Spatial Activities Test. In addition, the verbally loaded National Adult Reading Test was administered to assess whether the more visuospatial Block Design subtest was a better predictor of spatial ability. Moderate support was found. When age and sex were accounted for, Block Design accounted for 36% of the variance in performance (r = -.62) on the Road Map Test and 19% of the variance on the performance of the Everyday Spatial Activities Test (r = .42). In contrast, the scores on the National Adult Reading Test did not predict performance on the Road Map Test or Everyday Spatial Abilities Test. This suggests that, with appropriate caution, Block Design could be used as a measure of everyday spatial abilities.", "title": "" }, { "docid": "bec66d4d576f2c5c5643ffe4b72ab353", "text": "Many cities suffer from noise pollution, which compromises people's working efficiency and even mental health. New York City (NYC) has opened a platform, entitled 311, to allow people to complain about the city's issues by using a mobile app or making a phone call; noise is the third largest category of complaints in the 311 data. 
As each complaint about noises is associated with a location, a time stamp, and a fine-grained noise category, such as \"Loud Music\" or \"Construction\", the data is actually a result of \"human as a sensor\" and \"crowd sensing\", containing rich human intelligence that can help diagnose urban noises. In this paper we infer the fine-grained noise situation (consisting of a noise pollution indicator and the composition of noises) of different times of day for each region of NYC, by using the 311 complaint data together with social media, road network data, and Points of Interests (POIs). We model the noise situation of NYC with a three dimension tensor, where the three dimensions stand for regions, noise categories, and time slots, respectively. Supplementing the missing entries of the tensor through a context-aware tensor decomposition approach, we recover the noise situation throughout NYC. The information can inform people and officials' decision making. We evaluate our method with four real datasets, verifying the advantages of our method beyond four baselines, such as the interpolation-based approach.", "title": "" }, { "docid": "eb7d30ac4c490c4aa830d01053efcfda", "text": "In recent decades, ICT curriculum in K-10 has typically focussed on ICT as a tool, with the development of digital literacy being the key requirement. Areas such as computer science (CS) or computational thinking (CT) were typically isolated into senior secondary programs, with a focus on programming and algorithm development, when they were considered at all. New curricula introduced in England, and currently awaiting minister endorsement within Australia, have identified the need to educate for both digital literacy and CS, and the need to promote both from the commencement of schooling. This has presented significant challenges for teachers within this space, as they generally do not have the disciplinary knowledge to teach new computing curriculum and pedagogy in the early years is currently underdeveloped. In this paper, we introduce the CSER Digital Technologies MOOC, assisting teachers in the development of the fundamental knowledge of CT and the Australian Digital Technologies curriculum component. We describe our course structure, and key mechanisms for building a learning community within a MOOC context. We identify key challenges that teachers have identified in mastering this new curriculum, highlighting areas of future research in the teaching and learning of CT in K-6.", "title": "" }, { "docid": "a6a55ff4f72abce0c56986e8a44df2da", "text": "Antibodies are important therapeutic agents for cancer. Recently, it has become clear that antibodies possess several clinically relevant mechanisms of action. Many clinically useful antibodies can manipulate tumour-related signalling. In addition, antibodies exhibit various immunomodulatory properties and, by directly activating or inhibiting molecules of the immune system, antibodies can promote the induction of antitumour immune responses. These immunomodulatory properties can form the basis for new cancer treatment strategies.", "title": "" }, { "docid": "825640f8ce425a34462b98869758e289", "text": "Recurrent neural networks scale poorly due to the intrinsic difficulty in parallelizing their state computations. For instance, the forward pass computation of ht is blocked until the entire computation of ht−1 finishes, which is a major bottleneck for parallel computing. 
In this work, we propose an alternative RNN implementation by deliberately simplifying the state computation and exposing more parallelism. The proposed recurrent unit operates as fast as a convolutional layer and 5-10x faster than cuDNN-optimized LSTM. We demonstrate the unit’s effectiveness across a wide range of applications including classification, question answering, language modeling, translation and speech recognition. We open source our implementation in PyTorch and CNTK1.", "title": "" }, { "docid": "9e347b3fe360e138328ebb1ece61945f", "text": "This paper discusses the operational characteristics of the topologies for hybrid electric vehicles (HEV), fuel cell vehicles (FCV), and more electric vehicles (MEV). A brief description of series hybrid, parallel hybrid, and fuel cell-based propulsion systems are presented. The paper also presents fuel cell propulsion applications, more specific to light-duty passenger cars as well as heavy-duty buses. Finally, some of the major fundamental issues that currently face these advanced vehicular technologies including the challenges for market penetration are highlighted.", "title": "" }, { "docid": "330bbffaefd9f5d165b8eca16db1f991", "text": "1 Pharmacist, Professor, and Researcher at the College of Pharmacy at the Federal Fluminense University. 2 Physiatric Doctor at the Institute of Instituto de Medicina Física e Reabilitação do Hospital da Clínicas da Faculdade de Medicina da Universidade de São Paulo (Physical Medicine and Rehabilitation at the Hospital of the Clinics of the College of Medicine of the University of São Paulo). Coordinator of Teaching and Research of the Instituto Brasil de Tecnologias da Saúde (Brazilian Institute of Health Technologies). 3 Orthopediatric Doctor and Physiatrist, CSO of the Instituto Brasil de Tecnologias da Saúde (Brazilian Institute of Health Technologies). Peripheral vascular diseases (PVDS) are characterized as a circulation problem in the veins, arteries, and lymphatic system. The main therapy consists of changes in lifestyle such as diet and physical activity. The pharmacological therapy includes the use of vasoactive drugs, which are used in arteriopathies and venolymphatic disorders. The goal of this study was to research the scientific literature on the use and pharmacology of vasoactive drugs, emphasizing the efficacy of their local actions and administration.", "title": "" }, { "docid": "b09eedfc1b27d5666846c18423d1ad54", "text": "Recent years have seen many significant advances in program comprehension and software maintenance automation technology. In spite of the enormous potential savings in software maintenance costs, for the most part adoption of these ideas in industry remains at the experimental prototype stage. In this paper I explore some of the practical reasons for industrial resistance to adoption of software maintenance automation. 
Based on the experience of six years of software maintenance automation services to the financial industry involving more than 4.5 Gloc of code at Legasys Corporation, I discuss some of the social, technical and business realities that lie at the root of this resistance, outline various Legasys attempts overcome these barriers, and suggest some approaches to software maintenance automation that may lead to higher levels of industrial acceptance in the future.", "title": "" }, { "docid": "73b85a4948faf5b4d9a6a4019d3048ea", "text": "In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination dispositive. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.", "title": "" }, { "docid": "6d3410de121ffe037eafd5f30daa7252", "text": "One of the more important issues in the development of larger scale complex systems (product development period of two or more years) is accommodating changes to requirements. Requirements gathered for larger scale systems evolve during lengthy development periods due to changes in software and business environments, new user needs and technological advancements. Agile methods, which focus on accommodating change even late in the development lifecycle, can be adopted for the development of larger scale systems. However, as currently applied, these practices are not always suitable for the development of such systems. We propose a soft-structured framework combining the principles of agile and conventional software development that addresses the issue of rapidly changing requirements for larger scale systems. The framework consists of two parts: (1) a soft-structured requirements gathering approach that reflects the agile philosophy i.e., the Agile Requirements Generation Model and (2) a tailored development process that can be applied to either small or larger scale systems.", "title": "" }, { "docid": "fc1009e9515d83166e97e4e01ae9ca69", "text": "In this paper, we present two large video multi-modal datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD) that has a total of more than 50000 gestures for the \"one-shot-learning\" competition. To increase the potential of the old dataset, we designed new well curated datasets composed of 249 gesture labels, and including 47933 gestures manually labeled the begin and end frames in sequences. Using these datasets we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for \"user independent\" gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures while the second one is designed for gesture classification from segmented data. 
The baseline method based on the bag of visual words model is also presented.", "title": "" }, { "docid": "ed9beb7f6ffc65439f34294dec11a966", "text": "CONTEXT\nA variety of ankle self-stretching exercises have been recommended to improve ankle-dorsiflexion range of motion (DFROM) in individuals with limited ankle dorsiflexion. A strap can be applied to stabilize the talus and facilitate anterior glide of the distal tibia at the talocrural joint during ankle self-stretching exercises. Novel ankle self-stretching using a strap (SSS) may be a useful method of improving ankle DFROM.\n\n\nOBJECTIVE\nTo compare the effects of 2 ankle-stretching techniques (static stretching versus SSS) on ankle DFROM.\n\n\nDESIGN\nRandomized controlled clinical trial.\n\n\nSETTING\nUniversity research laboratory.\n\n\nPATIENTS OR OTHER PARTICIPANTS\nThirty-two participants with limited active dorsiflexion (<20°) while sitting (14 women and 18 men) were recruited.\n\n\nMAIN OUTCOME MEASURE(S)\nThe participants performed 2 ankle self-stretching techniques (static stretching and SSS) for 3 weeks. Active DFROM (ADFROM), passive DFROM (PDFROM), and the lunge angle were measured. An independent t test was used to compare the improvements in these values before and after the 2 stretching interventions. The level of statistical significance was set at α = .05.\n\n\nRESULTS\nActive DFROM and PDFROM were greater in both stretching groups after the 3-week interventions. However, ADFROM, PDFROM, and the lunge angle were greater in the SSS group than in the static-stretching group (P < .05).\n\n\nCONCLUSIONS\nAnkle SSS is recommended to improve ADFROM, PDFROM, and the lunge angle in individuals with limited DFROM.", "title": "" }, { "docid": "d15072fd8776d17e8a3b8b89af5fed08", "text": "PsV: psoriasis vulgaris INTRODUCTION Pityriasis amiantacea is a rare clinical condition characterized by masses of waxy and sticky scales that adhere to the scalp and tenaciously attach to hair bundles. Pityriasis amiantacea can be associated with psoriasis vulgaris (PsV).We examined a patient with pityriasis amiantacea caused by PsV who also had keratotic horns on the scalp, histopathologically fibrokeratomas. To the best of our knowledge, this is the first case of scalp fibrokeratoma stimulated by pityriasis amiantacea and PsV.", "title": "" }, { "docid": "61cd88d56bcae85c12dde4c2920af2ec", "text": "“Walk east on Flinders St/State Route 30 towards Market St; Turn right onto St Kilda Rd/Swanston St” vs. “Walk east on Flinders St/State Route 30 towards Market St; Turn right onto St Kilda Rd/Swanston St after Flinders Street Station, a yellow building with a green dome.” T1: <Flinders Street Station, front, Federation Square> T2: <Flinders Street Station, color, yellow> T3: <Flinders Street Station, has, green dome> Sent: Flinders Street Station is a yellow building with a green dome roof located in front of Federation Square", "title": "" }, { "docid": "a163d22ae7ef1e775e92f95476c6711e", "text": "With fast development and wide applications of next-generation sequencing (NGS) technologies, genomic sequence information is within reach to aid the achievement of goals to decode life mysteries, make better crops, detect pathogens, and improve life qualities. NGS systems are typically represented by SOLiD/Ion Torrent PGM from Life Sciences, Genome Analyzer/HiSeq 2000/MiSeq from Illumina, and GS FLX Titanium/GS Junior from Roche. 
Beijing Genomics Institute (BGI), which possesses the world's biggest sequencing capacity, has multiple NGS systems including 137 HiSeq 2000, 27 SOLiD, one Ion Torrent PGM, one MiSeq, and one 454 sequencer. We have accumulated extensive experience in sample handling, sequencing, and bioinformatics analysis. In this paper, technologies of these systems are reviewed, and first-hand data from extensive experience is summarized and analyzed to discuss the advantages and specifics associated with each sequencing system. At last, applications of NGS are summarized.", "title": "" }, { "docid": "a0547eae9a2186d4c6f1b8307317f061", "text": "Leadership scholars have called for additional research on leadership skill requirements and how those requirements vary by organizational level. In this study, leadership skill requirements are conceptualized as being layered (strata) and segmented (plex), and are thus described using a strataplex. Based on previous conceptualizations, this study proposes a model made up of four categories of leadership skill requirements: Cognitive skills, Interpersonal skills, Business skills, and Strategic skills. The model is then tested in a sample of approximately 1000 junior, midlevel, and senior managers, comprising a full career track in the organization. Findings support the “plex” element of the model through the emergence of four leadership skill requirement categories. Findings also support the “strata” portion of the model in that different categories of leadership skill requirements emerge at different organizational levels, and that jobs at higher levels of the organization require higher levels of all leadership skills. In addition, although certain Cognitive skill requirements are important across organizational levels, certain Strategic skill requirements only fully emerge at the highest levels in the organization. Thus a strataplex proved to be a valuable tool for conceptualizing leadership skill requirements across organizational levels. © 2007 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
662c32e9c28e63f32265a74486cab912
Learning Paraphrase Identification with Structural Alignment
[ { "docid": "f55f9174b70196e912c0cbe477ada467", "text": "This paper studies the use of structural representations for learning relations between pairs of short texts (e.g., sentences or paragraphs) of the kind: the second text answers to, or conveys exactly the same information of, or is implied by, the first text. Engineering effective features that can capture syntactic and semantic relations between the constituents composing the target text pairs is rather complex. Thus, we define syntactic and semantic structures representing the text pairs and then apply graph and tree kernels to them for automatically engineering features in Support Vector Machines. We carry out an extensive comparative analysis of stateof-the-art models for this type of relational learning. Our findings allow for achieving the highest accuracy in two different and important related tasks, i.e., Paraphrasing Identification and Textual Entailment Recognition.", "title": "" } ]
[ { "docid": "1c960375b6cdebfbd65ea0124dcdce0f", "text": "Parameterized unit tests extend the current industry practice of using closed unit tests defined as parameterless methods. Parameterized unit tests separate two concerns: 1) They specify the external behavior of the involved methods for all test arguments. 2) Test cases can be re-obtained as traditional closed unit tests by instantiating the parameterized unit tests. Symbolic execution and constraint solving can be used to automatically choose a minimal set of inputs that exercise a parameterized unit test with respect to possible code paths of the implementation. In addition, parameterized unit tests can be used as symbolic summaries which allows symbolic execution to scale for arbitrary abstraction levels. We have developed a prototype tool which computes test cases from parameterized unit tests. We report on its first use testing parts of the .NET base class library.", "title": "" }, { "docid": "c5081f86c4a173a40175e65b05d9effb", "text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.", "title": "" }, { "docid": "088011257e741b8d08a3b44978134830", "text": "This paper deals with the kinematic and dynamic analyses of the Orthoglide 5-axis, a five-degree-of-freedom manipulator. It is derived from two manipulators: i) the Orthoglide 3-axis; a three dof translational manipulator and ii) the Agile eye; a parallel spherical wrist. First, the kinematic and dynamic models of the Orthoglide 5-axis are developed. The geometric and inertial parameters of the manipulator are determined by means of a CAD software. Then, the required motors performances are evaluated for some test trajectories. Finally, the motors are selected in the catalogue from the previous results.", "title": "" }, { "docid": "9b8ba583adc6df6e02573620587be68a", "text": "BACKGROUND\nTraditional one-session exposure therapy (OST) in which a patient is gradually exposed to feared stimuli for up to 3 h in a one-session format has been found effective for the treatment of specific phobias. However, many individuals with specific phobia are reluctant to seek help, and access to care is lacking due to logistic challenges of accessing, collecting, storing, and/or maintaining stimuli. Virtual reality (VR) exposure therapy may improve upon existing techniques by facilitating access, decreasing cost, and increasing acceptability and effectiveness. 
The aim of this study is to compare traditional OST with in vivo spiders and a human therapist with a newly developed single-session gamified VR exposure therapy application with modern VR hardware, virtual spiders, and a virtual therapist.\n\n\nMETHODS/DESIGN\nParticipants with specific phobia to spiders (N = 100) will be recruited from the general public, screened, and randomized to either VR exposure therapy (n = 50) or traditional OST (n = 50). A behavioral approach test using in vivo spiders will serve as the primary outcome measure. Secondary outcome measures will include spider phobia questionnaires and self-reported anxiety, depression, and quality of life. Outcomes will be assessed using a non-inferiority design at baseline and at 1, 12, and 52 weeks after treatment.\n\n\nDISCUSSION\nVR exposure therapy has previously been evaluated as a treatment for specific phobias, but there has been a lack of high-quality randomized controlled trials. A new generation of modern, consumer-ready VR devices is being released that are advancing existing technology and have the potential to improve clinical availability and treatment effectiveness. The VR medium is also particularly suitable for taking advantage of recent phobia treatment research emphasizing engagement and new learning, as opposed to physiological habituation. This study compares a market-ready, gamified VR spider phobia exposure application, delivered using consumer VR hardware, with the current gold standard treatment. Implications are discussed.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier NCT02533310. Registered on 25 August 2015.", "title": "" }, { "docid": "e08c2c82730900fea60f6a3c81300430", "text": "The Internet of Things (IoT) is inter communication of embedded devices using networking technologies. The IoT will be one of the important trends in future, can affect the networking, business and communication. In this paper, proposing a remote sensing parameter of the human body which consists of pulse and temperature. The parameters that are used for sensing and monitoring will send the data through wireless sensors. Adding a web based observing helps to keep track of the regular health status of a patient. The sensing data will be continuously collected in a database and will be used to inform patient to any unseen problems to undergo possible diagnosis. Experimental results prove the proposed system is user friendly, reliable, economical.", "title": "" }, { "docid": "79cec2bfe95ae81b6dedf5c693f2acf0", "text": "Impedance of blood relatively affected by blood-glucose concentration. Blood electrical impedance value is varied with the content of blood glucose in a human body. This characteristic between glucose and electrical impedance has been proven by using four electrode method's measurement. The bioelectrical voltage output shows a difference between fasting and non-fasting blood glucose measured by using designed four tin lead alloy electrode. 10 test subjects ages between 20-25 years old are UniMAP student has been participated in this experiment and measurement of blood glucose using current clinical measurement and designed device is obtained. Preliminary study using the developed device, has shown that glucose value in the range of 4-5mol/Liter having the range of 0.500V to -1.800V during fasting, and 0.100V or less during normal glucose condition, 5 to 11 mol/liter. 
On the other hand, It also shows that prediction of blood glucose using this design device could achieve relevant for measurement accuracy compared to gold standard measurement, the hand prick invasive measurement. This early result has support that there is an ample scope in blood electrical study for the non-invasive blood glucose measurement.", "title": "" }, { "docid": "1f677c07ba42617ac590e6e0a5cdfeab", "text": "Network Functions Virtualization (NFV) is an emerging initiative to overcome increasing operational and capital costs faced by network operators due to the need to physically locate network functions in specific hardware appliances. In NFV, standard IT virtualization evolves to consolidate network functions onto high volume servers, switches and storage that can be located anywhere in the network. Services are built by chaining a set of Virtual Network Functions (VNFs) deployed on commodity hardware. The implementation of NFV leads to the challenge: How several network services (VNF chains) are optimally orchestrated and allocated on the substrate network infrastructure? In this paper, we address this problem and propose CoordVNF, a heuristic method to coordinate the composition of VNF chains and their embedding into the substrate network. CoordVNF aims to minimize bandwidth utilization while computing results within reasonable runtime.", "title": "" }, { "docid": "fce925493fc9f7cbbe4c202e5e625605", "text": "Topic models are a useful and ubiquitous tool for understanding large corpora. However, topic models are not perfect, and for many users in computational social science, digital humanities, and information studies—who are not machine learning experts—existing models and frameworks are often a “take it or leave it” proposition. This paper presents a mechanism for giving users a voice by encoding users’ feedback to topic models as correlations between words into a topic model. This framework, interactive topic modeling (itm), allows untrained users to encode their feedback easily and iteratively into the topic models. Because latency in interactive systems is crucial, we develop more efficient inference algorithms for tree-based topic models. We validate the framework both with simulated and real users.", "title": "" }, { "docid": "a7284bfc38d5925cb62f04c8f6dcaae2", "text": "The brain's electrical signals enable people without muscle control to physically interact with the world.", "title": "" }, { "docid": "d1668387f63428a25e3e63155c47de38", "text": "In this paper, a modified slot bow-tie antenna fed by a coplanar waveguide (CPW) is investigated for wideband operation. It is designed to work on a thin substrate (h = 0.787 mm) with a low dielectric constant (epsivr = 2.2) operating at two frequencies (1.8 and 2.4 GHz). Two metal stubs are introduced in the middle of the bow-tie slot in order to achieve a wider impedance bandwidth over the conventional CPW-fed bow-tie slot antenna. The antenna is fabricated and tested in the antenna measurement facility of UCL and experimental far-field results are presented showing low cross-polarisation levels. The bow-tie slot antenna can obtain at least 55% bandwidth.", "title": "" }, { "docid": "e919e6657597d61e4986f29766f142c8", "text": "Object reconstruction from a single image - in the wild - is a problem where we can make progress and get meaningful results today. 
This is the main message of this paper, which introduces an automated pipeline with pixels as inputs and 3D surfaces of various rigid categories as outputs in images of realistic scenes. At the core of our approach are deformable 3D models that can be learned from 2D annotations available in existing object detection datasets, that can be driven by noisy automatic object segmentations and which we complement with a bottom-up module for recovering high-frequency shape details. We perform a comprehensive quantitative analysis and ablation study of our approach using the recently introduced PASCAL 3D+ dataset and show very encouraging automatic reconstructions on PASCAL VOC.", "title": "" }, { "docid": "62166980f94bba5e75c9c6ad4a4348f1", "text": "In this paper the design and the implementation of a linear, non-uniform antenna array for a 77-GHz MIMO FMCW system that allows for the estimation of both the distance and the angular position of a target are presented. The goal is to achieve a good trade-off between the main beam width and the side lobe level. The non-uniform spacing in addition with the MIMO principle offers a superior performance compared to a classical uniform half-wavelength antenna array with an equal number of elements. However the design becomes more complicated and can not be tackled using analytical methods. Starting with elementary array factor considerations the design is approached using brute force, stepwise brute force, and particle swarm optimization. The particle swarm optimized array was also implemented. Simulation results and measurements are presented and discussed.", "title": "" }, { "docid": "4a94fb7432d172d5c1ce1e5429cc38b3", "text": "OBJECTIVE\nAssociations between eminent creativity and bipolar disorders have been reported, but there are few data relating non-eminent creativity to bipolar disorders in clinical samples. We assessed non-eminent creativity in euthymic bipolar (BP) and unipolar major depressive disorder (MDD) patients, creative discipline controls (CC), and healthy controls (HC).\n\n\nMETHODS\n49 BP, 25 MDD, 32 CC, and 47 HC (all euthymic) completed four creativity measures yielding six parameters: the Barron-Welsh Art Scale (BWAS-Total, and two subscales, BWAS-Dislike and BWAS-Like), the Adjective Check List Creative Personality Scale (ACL-CPS), and the Torrance Tests of Creative Thinking--Figural (TTCT-F) and Verbal (TTCT-V) versions. Mean scores on these instruments were compared across groups.\n\n\nRESULTS\nBP and CC (but not MDD) compared to HC scored significantly higher on BWAS-Total (45% and 48% higher, respectively) and BWAS-Dislike (90% and 88% higher, respectively), but not on BWAS-Like. CC compared to MDD scored significantly higher (12% higher) on TTCT-F. For all other comparisons, creativity scores did not differ significantly between groups.\n\n\nCONCLUSIONS\nWe found BP and CC (but not MDD) had similarly enhanced creativity on the BWAS-Total (driven by an increase on the BWAS-Dislike) compared to HC. Further studies are needed to determine the mechanisms of enhanced creativity and how it relates to clinical (e.g. temperament, mood, and medication status) and preclinical (e.g. visual and affective processing substrates) parameters.", "title": "" }, { "docid": "c450da231d3c3ec8410fe621f4ced54a", "text": "Distant supervision is a widely applied approach to automatic training of relation extraction systems and has the advantage that it can generate large amounts of labelled data with minimal effort. 
However, this data may contain errors and consequently systems trained using distant supervision tend not to perform as well as those based on manually labelled data. This work proposes a novel method for detecting potential false negative training examples using a knowledge inference method. Results show that our approach improves the performance of relation extraction systems trained using distantly supervised data.", "title": "" }, { "docid": "02322377d048f2469928a71290cf1566", "text": "In order to interact with human environments, humanoid robots require safe and compliant control which can be achieved through force-controlled joints. In this paper, full body step recovery control for robots with force-controlled joints is achieved by adding model-based feed-forward controls. Push Recovery Model Predictive Control (PR-MPC) is presented as a method for generating full-body step recovery motions after a large disturbance. Results are presented from experiments on the Sarcos Primus humanoid robot that uses hydraulic actuators instrumented with force feedback control.", "title": "" }, { "docid": "0757280353e6e1bd73b3d1cd11f6b031", "text": "OBJECTIVE\nTo investigate seasonal patterns in mood and behavior and estimate the prevalence of seasonal affective disorder (SAD) and subsyndromal seasonal affective disorder (S-SAD) in the Icelandic population.\n\n\nPARTICIPANTS AND SETTING\nA random sample generated from the Icelandic National Register, consisting of 1000 men and women aged 17 to 67 years from all parts of Iceland. It represents 6.4 per million of the Icelandic population in this age group.\n\n\nDESIGN\nThe Seasonal Pattern Assessment Questionnaire, an instrument for investigating mood and behavioral changes with the seasons, was mailed to a random sample of the Icelandic population. The data were compared with results obtained with similar methods in populations in the United States.\n\n\nMAIN OUTCOME MEASURES\nSeasonality score and prevalence rates of seasonal affective disorder and subsyndromal seasonal affective disorder.\n\n\nRESULTS\nThe prevalence of SAD and S-SAD were estimated at 3.8% and 7.5%, respectively, which is significantly lower than prevalence rates obtained with the same method on the east coast of the United States (chi 2 = 9.29 and 7.3; P < .01). The standardized rate ratios for Iceland compared with the United States were 0.49 and 0.63 for SAD and S-SAD, respectively. No case of summer SAD was found.\n\n\nCONCLUSIONS\nSeasonal affective disorder and S-SAD are more common in younger individuals and among women. The weight gained by patients during the winter does not seem to result in chronic obesity. The prevalence of SAD and S-SAD was lower in Iceland than on the East Coast of the United States, in spite of Iceland's more northern latitude. These results are unexpected since the prevalence of these disorders has been found to increase in more northern latitudes. The Icelandic population has remained remarkably isolated during the past 1000 years. It is conceivable that persons with a predisposition to SAD have been at a disadvantage and that there may have been a population selection toward increased tolerance of winter darkness.", "title": "" }, { "docid": "a114d20db34d29702b4f713c9569bc26", "text": "This paper describes a new approach towards detecting plagiarism and scientific documents that have been read but not cited. 
In contrast to existing approaches, which analyze documents' words but ignore their citations, this approach is based on citation analysis and allows duplicate and plagiarism detection even if a document has been paraphrased or translated, since the relative position of citations remains similar. Although this approach allows in many cases the detection of plagiarized work that could not be detected automatically with the traditional approaches, it should be considered as an extension rather than a substitute. Whereas the known text analysis methods can detect copied or, to a certain degree, modified passages, the proposed approach requires longer passages with at least two citations in order to create a digital fingerprint.", "title": "" }, { "docid": "88a2ed90fc39a4ad083aff9fabcf2bc6", "text": "This two-part article provides an overview of the global burden of atherothrombotic cardiovascular disease. Part I initially discusses the epidemiological transition which has resulted in a decrease in deaths in childhood due to infections, with a concomitant increase in cardiovascular and other chronic diseases; and then provides estimates of the burden of cardiovascular (CV) diseases with specific focus on the developing countries. Next, we summarize key information on risk factors for cardiovascular disease (CVD) and indicate that their importance may have been underestimated. Then, we describe overarching factors influencing variations in CVD by ethnicity and region and the influence of urbanization. Part II of this article describes the burden of CV disease by specific region or ethnic group, the risk factors of importance, and possible strategies for prevention.", "title": "" }, { "docid": "41b712d0d485c65a8dff32725c215f97", "text": "In this article, we present a novel, multi-user, virtual reality environment for the interactive, collaborative 3D analysis of large 3D scans and the technical advancements that were necessary to build it: a multi-view rendering system for large 3D point clouds, a suitable display infrastructure, and a suite of collaborative 3D interaction techniques. The cultural heritage site of Valcamonica in Italy with its large collection of prehistoric rock-art served as an exemplary use case for evaluation. The results show that our output-sensitive level-of-detail rendering system is capable of visualizing a 3D dataset with an aggregate size of more than 14 billion points at interactive frame rates. The system design in this exemplar application results from close exchange with a small group of potential users: archaeologists with expertise in rockart. The system allows them to explore the prehistoric art and its spatial context with highly realistic appearance. A set of dedicated interaction techniques was developed to facilitate collaborative visual analysis. A multi-display workspace supports the immediate comparison of geographically distributed artifacts. An expert review of the final demonstrator confirmed the potential for added value in rock-art research and the usability of our collaborative interaction techniques.", "title": "" }, { "docid": "982ebb6c33a1675d3073896e3768212a", "text": "Morphometric analysis of nuclei play an essential role in cytological diagnostics. Cytological samples contain hundreds or thousands of nuclei that need to be examined for cancer. The process is tedious and time-consuming but can be automated. Unfortunately, segmentation of cytological samples is very challenging due to the complexity of cellular structures. 
To deal with this problem, we propose an approach that combines a convolutional neural network with an ellipse fitting algorithm to segment nuclei in cytological images of breast cancer. Images are preprocessed by the colour deconvolution procedure to extract hematoxylin-stained objects (nuclei). Next, the convolutional neural network performs semantic segmentation of the preprocessed image to extract nuclei silhouettes. To find the exact location of nuclei and to separate touching and overlapping nuclei, we approximate them with ellipses of various sizes and orientations, fitted using the Bayesian object recognition approach. The accuracy of the proposed approach is evaluated against reference nuclei segmented manually. Tests carried out on breast cancer images show that the proposed method can accurately segment elliptic-shaped objects.", "title": "" } ]
scidocsrr
a8a3f3eaa05364532f503543e84da9b3
Table Extraction from Document Images using Fixed Point Model
[ { "docid": "93cec060a420f2ffc3e67eb532186f8e", "text": "This paper presents an efficient approach to identify tabular structures within either electronic or paper documents. The resulting T—Recs system takes word bounding box information as input, and outputs the corresponding logical text block units (e.g. the cells within a table environment). Starting with an arbitrary word as block seed the algorithm recursively expands this block to all words that interleave with their vertical (north and south) neighbors. Since even smallest gaps of table columns prevent their words from mutual interleaving, this initial segmentation is able to identify and isolate such columns. In order to deal with some inherent segmentation errors caused by isolated lines (e.g. headers), overhanging words, or cells spawning more than one column, a series of postprocessing steps is added. These steps benefit from a very simple distinction between type 1 and type 2 blocks: type 1 blocks are those of at most one word per line, all others are of type 2. This distinction allows the selective application of heuristics to each group of blocks. The conjoint decomposition of column blocks into subsets of table cells leads to the final block segmentation of a homogeneous abstraction level. These segments serve the final layout analysis which identifies table environments and cells that are stretching over several rows and/or columns.", "title": "" } ]
[ { "docid": "fe513114c9c78c546ae7018ff84f9cab", "text": "Three-dimensional geometric morphometric (3DGM) methods for placing landmarks on digitized bones have become increasingly sophisticated in the last 20 years, including greater degrees of automation. One aspect shared by all 3DGM methods is that the researcher must designate initial landmarks. Thus, researcher interpretations of homology and correspondence are required for and influence representations of shape. We present an algorithm allowing fully automatic placement of correspondence points on samples of 3D digital models representing bones of different individuals/species, which can then be input into standard 3DGM software and analyzed with dimension reduction techniques. We test this algorithm against several samples, primarily a dataset of 106 primate calcanei represented by 1,024 correspondence points per bone. Results of our automated analysis of these samples are compared to a published study using a traditional 3DGM approach with 27 landmarks on each bone. Data were analyzed with morphologika(2.5) and PAST. Our analyses returned strong correlations between principal component scores, similar variance partitioning among components, and similarities between the shape spaces generated by the automatic and traditional methods. While cluster analyses of both automatically generated and traditional datasets produced broadly similar patterns, there were also differences. Overall these results suggest to us that automatic quantifications can lead to shape spaces that are as meaningful as those based on observer landmarks, thereby presenting potential to save time in data collection, increase completeness of morphological quantification, eliminate observer error, and allow comparisons of shape diversity between different types of bones. We provide an R package for implementing this analysis.", "title": "" }, { "docid": "7e78dbc7ae4fd9a2adbf7778db634b33", "text": "Dynamic Proof of Storage (PoS) is a useful cryptographic primitive that enables a user to check the integrity of outsourced files and to efficiently update the files in a cloud server. Although researchers have proposed many dynamic PoS schemes in singleuser environments, the problem in multi-user environments has not been investigated sufficiently. A practical multi-user cloud storage system needs the secure client-side cross-user deduplication technique, which allows a user to skip the uploading process and obtain the ownership of the files immediately, when other owners of the same files have uploaded them to the cloud server. To the best of our knowledge, none of the existing dynamic PoSs can support this technique. In this paper, we introduce the concept of deduplicatable dynamic proof of storage and propose an efficient construction called DeyPoS, to achieve dynamic PoS and secure cross-user deduplication, simultaneously. Considering the challenges of structure diversity and private tag generation, we exploit a novel tool called Homomorphic Authenticated Tree (HAT). We prove the security of our construction, and the theoretical analysis and experimental results show that our construction is efficient in practice.", "title": "" }, { "docid": "1ceab925041160f17163940360354c55", "text": "A complete reconstruction of D.H. Lehmer’s ENIAC set-up for computing the exponents of p modulo 2 is given. This program served as an early test program for the ENIAC (1946). 
The reconstruction illustrates the difficulties of early programmers to find a way between a man operated and a machine operated computation. These difficulties concern both the content level (the algorithm) and the formal level (the logic of sequencing operations).", "title": "" }, { "docid": "0f269444f8326c42171943c557334657", "text": "This paper is about opening and closing an unknown drawer using an aerial manipulator. To accommodate practical applications, it is assumed that the direction of motion and mechanical properties of the drawer are not given beforehand. A multirotor combined with a robotic arm is used for the manipulation task. Typical drawers are allowed to move in only one direction, which constrains the motion of the aerial manipulator while operating a drawer. To analyze this interaction, the dynamic characteristics of the aerial manipulator are modeled. Also, configuration of the aerial manipulator for exerting the desired force to a drawer is presented. To handle the uncertainties associated with the mechanism of a drawer, strategies exploiting velocity of the end effector are employed. The proposed approach is validated with experiments including opening and closing a common drawer, which is detected by a camera mounted in the palm of the end effector.", "title": "" }, { "docid": "c573baa73f417485e3afb31d2f6fc912", "text": "The Liver is a largest gland in the body. Distinct diseases affected on the liver. Liver diseases is one of the most serious health problem worldwide. For detecting the liver diseases the Segmentation Technique is essential. Segmentation is used for the classification of liver diseases. The liver diseases are focal or diffused is easily understood by the physician using segmentation. We use CT scan image for segmentation but the noise is present in the image. Therefore preprocessing is applied on the image for the removal of noise. In this paper, Watershed Transform segmentation Algorithm is used because it produce complete division of images in separate region even if contrast is poor. Therefore this method could be achieved 92.1% accuracy.", "title": "" }, { "docid": "83f17f43e7b2e21d4aa3baf54270c76f", "text": "Artificial intelligence (AI) is an important technology that supports daily social life and economic activities. It contributes greatly to the sustainable growth of Japan's economy and solves various social problems. In recent years, AI has attracted attention as a key for growth in developed countries such as Europe and the United States and developing countries such as China and India. The attention has been focused mainly on developing new artificial intelligence information communication technology (ICT) and robot technology (RT). Although recently developed AI technology certainly excels in extracting certain patterns, there are many limitations. Most ICT models are overly dependent on big data, lack a self-idea function, and are complicated. In this paper, rather than merely developing nextgeneration artificial intelligence technology, we aim to develop a new concept of general-purpose intelligence cognition technology called “Beyond AI”. Specifically, we plan to develop an intelligent learning model called “Brain Intelligence (BI)” that generates new ideas about events without having experienced them by using artificial life with an imagine function. 
We will also conduct demonstrations of the developed BI intelligence learning model on automatic driving, precision medical care, and industrial robots.", "title": "" }, { "docid": "24a78bcc7c60ab436f6fd32bdc0d7661", "text": "Passing the Turing Test is not a sensible goal for Artificial Intelligence. Adherence to Turing's vision from 1950 is now actively harmful to our field. We review problems with Turing's idea, and suggest that, ironically, the very cognitive science that he tried to create must reject his research goal.", "title": "" }, { "docid": "6cb46b57b657a90fb5b4b91504cdfd8f", "text": "One of the themes of Emotion and Decision-Making Explained (Rolls, 2014c) is that there are multiple routes to emotionrelated responses, with some illustrated in Fig. 1. Brain systems involved in decoding stimuli in terms of whether they are instrumental reinforcers so that goal directed actions may be performed to obtain or avoid the stimuli are emphasized as being important for emotional states, for an intervening state may be needed to bridge the time gap between the decoding of a goal-directed stimulus, and the actions that may need to be set into train and directed to obtain or avoid the emotionrelated stimulus. In contrast, when unconditioned or classically conditioned responses such as autonomic responses, freezing, turning away etc. are required, there is no need for intervening states such as emotional states. These points are covered in Chapters 2e4 and 10 of the book. Ono and Nishijo (2014) raise the issue of the extent to which subcortical pathways are involved in the elicitation of some of these emotion-related responses. They describe interesting research that pulvinar neurons in macaques may respond to snakes, and may provide a route that does not require cortical processing for some probably innately specified visual stimuli to produce responses. With respect to Fig. 1, the pathway is that some of the inputs labeled as primary reinforcers may reach brain regions including the amygdala by a subcortical route. LeDoux (2012) provides evidence in the same direction, in his case involving a ‘low road’ for auditory stimuli such as tones (which do not required cortical processing) to reach, via a subcortical pathway, the amygdala, where classically conditioned e.g., freezing and autonomic responses may be learned. Consistently, there is evidence (Chapter 4) that humans with damage to the primary visual cortex who describe themselves as blind do nevertheless show some responses to stimuli such as a face expression (de Gelder, Vroomen, Pourtois, & Weiskrantz, 1999; Tamietto et al., 2009; Tamietto & de Gelder, 2010). I agree that the elicitation of unconditioned and conditioned responses to these particular types of stimuli (LeDoux, 2014) is of interest (Rolls, 2014a). However, in Emotion and Decision-Making Explained, I emphasize that there aremassive cortical inputs to structures involved in emotion such as the amygdala and orbitofrontal cortex, and that neurons in both structures can have viewinvariant responses to visual stimuli including faces which specify face identity, and can have responses that are selective for particular emotional expressions (Leonard, Rolls, Wilson, & Baylis, 1985; Rolls, 1984, 2007, 2011, 2012; Rolls, Critchley, Browning, & Inoue, 2006) which reflect the neuronal responses found in the temporal cortical and related visual areas, as we discovered (Perrett, Rolls, & Caan, 1982; Rolls, 2007, 2008a, 2011, 2012; Sanghera, Rolls, & Roper-Hall, 1979). 
View invariant representations are important for", "title": "" }, { "docid": "78d1dafdd3c33c3d462185b1a96d585e", "text": "Online games have exploded in popularity, but for many researchers access to players has been difficult. The study reported here is the first to collect a combination of survey and behavioral data with the cooperation of a major virtual world operator. In the current study, 7,000 players of the massively multiplayer online game (MMO) EverQuest 2 were surveyed about their offline characteristics, their motivations and their physical and mental health. These self-report data were then combined with data on participants’ actual in-game play behaviors, as collected by the game operator. Most of the results defy common stereotypes in surprising and interesting ways and have implications for communication theory and for future investigations of games.", "title": "" }, { "docid": "9d0b7f84d0d326694121a8ba7a3094b4", "text": "Passive sensing of human hand and limb motion is important for a wide range of applications from human-computer interaction to athletic performance measurement. High degree of freedom articulated mechanisms like the human hand are di cult to track because of their large state space and complex image appearance. This article describes a model-based hand tracking system, called DigitEyes, that can recover the state of a 27 DOF hand model from ordinary gray scale images at speeds of up to 10 Hz.", "title": "" }, { "docid": "9423718cce01b45c688066f322b2c2aa", "text": "Currently there are many techniques based on information technology and communication aimed at assessing the performance of students. Data mining applied in the educational field (educational data mining) is one of the most popular techniques that are used to provide feedback with regard to the teaching-learning process. In recent years there have been a large number of open source applications in the area of educational data mining. These tools have facilitated the implementation of complex algorithms for identifying hidden patterns of information in academic databases. The main objective of this paper is to compare the technical features of three open source tools (RapidMiner, Knime and Weka) as used in educational data mining. These features have been compared in a practical case study on the academic records of three engineering programs in an Ecuadorian university. This comparison has allowed us to determine which tool is most effective in terms of predicting student performance.", "title": "" }, { "docid": "899e96eacd2c73730c157056c56eea25", "text": "Hyaluronic acid (HA), a macropolysaccharidic component of the extracellular matrix, is common to most species and it is found in many sites of the human body, including skin and soft tissue. Not only does HA play a variety of roles in physiologic and in pathologic events, but it also has been extensively employed in cosmetic and skin-care products as drug delivery agent or for several biomedical applications. The most important limitations of HA are due to its short half-life and quick degradation in vivo and its consequently poor bioavailability. In the aim to overcome these difficulties, HA is generally subjected to several chemical changes. In this paper we obtained an acetylated form of HA with increased bioavailability with respect to the HA free form. 
Furthermore, an improved radical scavenging and anti-inflammatory activity has been evidenced, respectively, on ABTS radical cation and murine monocyte/macrophage cell lines (J774.A1).", "title": "" }, { "docid": "ec4dae5e2aa5a5ef67944d82a6324c9d", "text": "Parallel collection processing based on second-order functions such as map and reduce has been widely adopted for scalable data analysis. Initially popularized by Google, over the past decade this programming paradigm has found its way in the core APIs of parallel dataflow engines such as Hadoop's MapReduce, Spark's RDDs, and Flink's DataSets. We review programming patterns typical of these APIs and discuss how they relate to the underlying parallel execution model. We argue that fixing the abstraction leaks exposed by these patterns will reduce the cost of data analysis due to improved programmer productivity. To achieve that, we first revisit the algebraic foundations of parallel collection processing. Based on that, we propose a simplified API that (i) provides proper support for nested collection processing and (ii) alleviates the need of certain second-order primitives through comprehensions -- a declarative syntax akin to SQL. Finally, we present a metaprogramming pipeline that performs algebraic rewrites and physical optimizations which allow us to target parallel dataflow engines like Spark and Flink with competitive performance.", "title": "" }, { "docid": "bb28519ca1161bafb9b3812b1fd66ed1", "text": "Considering the variations of inertia in real applications, an adaptive control scheme for the permanent-magnet synchronous motor speed-regulation system is proposed in this paper. First, a composite control method, i.e., the extended-state-observer (ESO)-based control method, is employed to ensure the performance of the closed-loop system. The ESO can estimate both the states and the disturbances simultaneously so that the composite speed controller can have a corresponding part to compensate for the disturbances. Then, considering the case of variations of load inertia, an adaptive control scheme is developed by analyzing the control performance relationship between the feedforward compensation gain and the system inertia. By using inertia identification techniques, a fuzzy-inferencer-based supervisor is designed to automatically tune the feedforward compensation gain according to the identified inertia. Simulation and experimental results both show that the proposed method achieves a better speed response in the presence of inertia variations.", "title": "" }, { "docid": "6819116197ba7a081922ef33175c8882", "text": "The recent advanced face recognition systems were built on large Deep Neural Networks (DNNs) or their ensembles, which have millions of parameters. However, the expensive computation of DNNs make their deployment difficult on mobile and embedded devices. This work addresses model compression for face recognition, where the learned knowledge of a large teacher network or its ensemble is utilized as supervision to train a compact student network. Unlike previous works that represent the knowledge by the soften label probabilities, which are difficult to fit, we represent the knowledge by using the neurons at the higher hidden layer, which preserve as much information as the label probabilities, but are more compact. By leveraging the essential characteristics (domain knowledge) of the learned face representation, a neuron selection method is proposed to choose neurons that are most relevant to face recognition. 
Using the selected neurons as supervision to mimic the single networks of DeepID2+ and DeepID3, which are the state-of-the-art face recognition systems, a compact student with simple network structure achieves better verification accuracy on LFW than its teachers, respectively. When using an ensemble of DeepID2+ as teacher, a mimicked student is able to outperform it and achieves 51.6× compression ratio and 90× speed-up in inference, making this cumbersome model applicable on portable devices. Introduction As the emergence of big training data, Deep Neural Networks (DNNs) recently attained great breakthroughs in face recognition [23, 20, 21, 22, 19, 15, 29, 30, 28] and become applicable in many commercial platforms such as social networks, e-commerce, and search engines. To absorb massive supervision from big training data, existing works typically trained a large DNN or a DNN ensemble, where each DNN consists of millions of parameters. Nevertheless, as face recognition shifts toward mobile and embedded devices, large DNNs are computationally expensive, which prevents them from being deployed to these devices. It motivates research of using a small network to fit very large training ∗indicates co-first authors who contributed equally. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. data. This work addresses model compression of DNNs for face recognition, by incorporating domain knowledge of learning face representation. There have been several attempts [1, 7, 18] in literature to compress DNNs, so as to make their deployments easier, where a single network (i.e. a student) was trained by using the knowledge learned with a large DNN or a DNN ensemble (i.e. a teacher) as supervision. This knowledge can be simply represented as the probabilities of label predictions by employing the softmax function [10]. Compared with the original 1-of-K hard labels, the label probabilities encode richer relative similarities among training samples and can train a DNN more effectively. However, this representation loses much information because most of the probabilities are close to zeros after squashed by softmax. To overcome this problem, Ba and Caruana [1] represented the learned knowledge by using the logits, which are the values before softmax activation but zero-meaned, revealing the relationship between labels as well as the similarities among samples in the logit space. However, as these unconstrained values (e.g. the large negatives) may contain noisy information that overfits the training data, using them as supervision limits the generalization ability of the student. Recently, Hinton et al. [7] showed that both the label probabilities and zero-meaned logits are two extreme outputs of the softmax functions, where the temperature becomes one and positive infinity, respectively. To remove target noise, they empirically searched for a suitable temperature in the softmax function, until it produced soften probabilities that were able to disclose the similarity structure of data. As these soften target labels comprise much valuable information, a single student trained on them is able to mimic the performance of a cumbersome network ensemble. Despite the successes of [7], our empirical results show that training on soft targets is difficult to converge when compressing DNNs for face recognition. 
Previous studies [23, 24, 20, 19] have shown that the face representation learned from classifying larger amount of identities in the training data (e.g. 250 thousand in [24]) may have better generalization capacity. In face recognition, it seems difficult to fit soft targets with high dimensionality, which makes convergence slow. In this work, we show that instead of using soft targets in the output layer, the knowledge of the teacher can also be obtained from the neurons in the top hidden layer, which preserve as much information as the soft targets (as the soft targets are predicted from these neurons) but are more compact, e.g. 512 versus 12,994 according to the net structure in [21]. As these neurons may contain noise or information not relevant to face recognition, they are further selected according to the usefulness of knowledge captured by them. In particular, the selection is motivated by three original observations (domain knowledge) of face representation disclosed in this work, which are naturally generalized to all DNNs trained by distinguishing massive identities, such as [19, 23, 24, 22]. (1) Deeply learned face representation by the face recognition task is a distributed representation [6] over face attributes, including the identity-related attributes (IA), such as gender, race, and shapes of facial components, as well as the identity non-related attributes (NA), such as expression, lighting, and photo quality. This observation implies that each attribute concept is explained by having some neurons being activated while each neuron is involved in representing more than one attribute, although attribute labels are not provided during training. (2) However, a certain amount of neurons are selective to NA or both NA and IA, implying that the distributed representation is neither invariant nor completely factorized, because attributes in NA are variations that should be removed in face recognition, whereas these two factors (NA and IA) are presented and coupled in some neurons. (3) Furthermore, a small amount of neurons are inhibitive to all attributes and server as noise. With these observations, we cast neuron selection as inference on a fully-connected graph, where each node represents attribute-selectiveness of neuron and each edge represents correlation between neurons. An efficient mean field algorithm [9] enables us to select neurons that are more selective or discriminative to IA, but less correlated with each other. As a result, the features of the selected neurons are able to maintain the inter-personal discriminativeness (i.e. distributed and factorized to explain IA), while reducing intra-personal variations (i.e. invariant to NA). We employ the features after neuron selection as regression targets to train the student. To evaluate neuron selection, we employ DeepID2+ [21] as a teacher (T1), which achieved state-of-the-art performance on LFW benchmark [8]. This work is chosen as an example because it successfully incorporated multiple complex components for face recognition, such as local convolution [12], ranking loss function [19], deeply supervised learning [13], and model ensemble [17]. The effectiveness of all these components in face recognition have been validated by many existing works [19, 23, 24, 27]. Evaluating neuron selection on it demonstrates its capacity and generalization ability on mimicking functions induced by different learning strategies in face recognition. 
With neuron selection, a student with simple network structure is able to outperform a single network of T1 or its ensemble. Interestingly, this simple student generalizes well to mimic a deeper teacher (T2), DeepID3 [22], which is a recent extension of DeepID2+. Although there are other advanced methods [24, 19] in face recognition, [21, 22] are more suitable to be taken as baselines. They outperformed [24] and achieved comparable result with [19] on LFW with much smaller size of training data and identities, i.e. 290K images [21] compares to 7.5M images [24] and 200M images [19]. We cannot compare with [24, 19] because their data are unavailable. Three main contributions of this work are summarized as below. (1) We demonstrate that more compact supervision converge more efficiently, when compressing DNNs for face recognition. Soft targets are difficult to fit because of high dimensionality. Instead, neurons in the top hidden layers are proper supervision, as they capture as much information as soft targets but more compact. (2) Three valuable observations are disclosed from the deeply learned face representation, identifying the usefulness of knowledge captured in these neurons. These observations are naturally generalized to all DNNs trained on face images. (3) With these observations, an efficient neuron selection method is proposed for model compression and its effectiveness is validated on T1 and T2. Face Model Compression Training Student via Neuron Selection The merit behind our method is to select informative neurons in the top hidden layer of a teacher, and adopt the features (responses) of the chosen neurons as supervision to train a student, mimicking the teacher’s feature space. We formulate the objective function of model compression as a regression problem given a training set D = {Ii, fi}i=1,", "title": "" }, { "docid": "780fa6ac90a33818ae9a564f73ef96e0", "text": "The e-commerce literature has rarely addressed the measurement of customer perceptions of website service quality in digital marketing environments. It is argued that the current SERVQUAL and IS-SERVQUAL instruments need to be refined and validated to fit the digital marketing environment, as they are targeted primarily towards either traditional retailing or information systems contexts. This article validates and refines a comprehensive model and instrument for measuring customer-perceived service quality of websites that market digital products and services. After a discussion of the conceptualization and operationalization of the service quality construct, the procedure used in modifying items, collecting data, and validating a multiple-item scale is described. Subsequently, evidence of reliability and validity on the basis of analyzing data from a quota sample of 260 adult respondents is presented. Implications for practice and research are then explored. Finally, this paper concludes by discussing limitations that could be addressed in future studies. The final EC-SERVQUAL instrument with good reliability and validity will be essential to the development and testing of e-business theories, and provide researchers with a common framework for explaining, justifying, and comparing differences across results.", "title": "" }, { "docid": "54b4726650b3afcddafb120ff99c9951", "text": "Online harassment has been a problem to a greater or lesser extent since the early days of the internet. 
Previous work has applied anti-spam techniques like machine-learning based text classification (Reynolds, 2011) to detecting harassing messages. However, existing public datasets are limited in size, with labels of varying quality. The #HackHarassment initiative (an alliance of 1 tech companies and NGOs devoted to fighting bullying on the internet) has begun to address this issue by creating a new dataset superior to its predecssors in terms of both size and quality. As we (#HackHarassment) complete further rounds of labelling, later iterations of this dataset will increase the available samples by at least an order of magnitude, enabling corresponding improvements in the quality of machine learning models for harassment detection. In this paper, we introduce the first models built on the #HackHarassment dataset v1.0 (a new open dataset, which we are delighted to share with any interested researcherss) as a benchmark for future research.", "title": "" }, { "docid": "0ae5df7af64f0069d691922d391f3c60", "text": "With the realization that more research is needed to explore external factors (e.g., pedagogy, parental involvement in the context of K-12 learning) and internal factors (e.g., prior knowledge, motivation) underlying student-centered mobile learning, the present study conceptually and empirically explores how the theories and methodologies of self-regulated learning (SRL) can help us analyze and understand the processes of mobile learning. The empirical data collected from two elementary science classes in Singapore indicates that the analytical SRL model of mobile learning proposed in this study can illuminate the relationships between three aspects of mobile learning: students’ self-reports of psychological processes, patterns of online learning behavior in the mobile learning environment (MLE), and learning achievement. Statistical analyses produce three main findings. First, student motivation in this case can account for whether and to what degree the students can actively engage in mobile learning activities metacognitively, motivationally, and behaviorally. Second, the effect of students’ self-reported motivation on their learning achievement is mediated by their behavioral engagement in a pre-designed activity in the MLE. Third, students’ perception of parental autonomy support is not only associated with their motivation in school learning, but also associated with their actual behaviors in self-regulating their learning. ! 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e1fb80117a0925954b444360e227d680", "text": "Maize is one of the most important food and feed crops in Asia, and is a source of income for several million farmers. Despite impressive progress made in the last few decades through conventional breeding in the “Asia-7” (China, India, Indonesia, Nepal, Philippines, Thailand, and Vietnam), average maize yields remain low and the demand is expected to increasingly exceed the production in the coming years. Molecular marker-assisted breeding is accelerating yield gains in USA and elsewhere, and offers tremendous potential for enhancing the productivity and value of Asian maize germplasm. 
We discuss the importance of such efforts in meeting the growing demand for maize in Asia, and provide examples of the recent use of molecular markers with respect to (i) DNA fingerprinting and genetic diversity analysis of maize germplasm (inbreds and landraces/OPVs), (ii) QTL analysis of important biotic and abiotic stresses, and (iii) marker-assisted selection (MAS) for maize improvement. We also highlight the constraints faced by research institutions wishing to adopt the available and emerging molecular technologies, and conclude that innovative models for resource-pooling and intellectual-property-respecting partnerships will be required for enhancing the level and scope of molecular marker-assisted breeding for maize improvement in Asia. Scientists must ensure that the tools of molecular marker-assisted breeding are focused on developing commercially viable cultivars, improved to ameliorate the most important constraints to maize production in Asia.", "title": "" }, { "docid": "7a3c34357535b09507c541c98d6dc038", "text": "Fine-grained opinion analysis aims to extract aspect and opinion terms from each sentence for opinion summarization. Supervised learning methods have proven to be effective for this task. However, in many domains, the lack of labeled data hinders the learning of a precise extraction model. In this case, unsupervised domain adaptation methods are desired to transfer knowledge from the source domain to any unlabeled target domain. In this paper, we develop a novel recursive neural network that could reduce domain shift effectively in word level through syntactic relations. We treat these relations as invariant “pivot information” across domains to build structural correspondences and generate an auxiliary task to predict the relation between any two adjacent words in the dependency tree. In the end, we demonstrate state-ofthe-art results on three benchmark datasets.", "title": "" } ]
scidocsrr
9d027c9e6e5920aa27008f23e4e60bfa
Chitchat: Navigating tradeoffs in device-to-device context sharing
[ { "docid": "01490975c291a64b40484f6d37ea1c94", "text": "Context-aware systems offer entirely new opportunities for application developers and for end users by gathering context data and adapting systems’ behavior accordingly. Especially in combination with mobile devices such mechanisms are of great value and claim to increase usability tremendously. In this paper, we present a layered architectural framework for context-aware systems. Based on our suggested framework for analysis, we introduce various existing context-aware systems focusing on context-aware middleware and frameworks, which ease the development of context-aware applications. We discuss various approaches and analyze important aspects in context-aware computing on the basis of the presented systems.", "title": "" } ]
[ { "docid": "0dc3c4e628053e8f7c32c0074a2d1a59", "text": "Understanding inter-character relationships is fundamental for understanding character intentions and goals in a narrative. This paper addresses unsupervised modeling of relationships between characters. We model relationships as dynamic phenomenon, represented as evolving sequences of latent states empirically learned from data. Unlike most previous work our approach is completely unsupervised. This enables data-driven inference of inter-character relationship types beyond simple sentiment polarities, by incorporating lexical and semantic representations, and leveraging large quantities of raw text. We present three models based on rich sets of linguistic features that capture various cues about relationships. We compare these models with existing techniques and also demonstrate that relationship categories learned by our model are semantically coherent.", "title": "" }, { "docid": "842e7c5b825669855617133b0067efc9", "text": "This research proposes a robust method for disc localization and cup segmentation that incorporates masking to avoid misclassifying areas as well as forming the structure of the cup based on edge detection. Our method has been evaluated using two fundus image datasets, namely: D-I and D-II comprising of 60 and 38 images, respectively. The proposed method of disc localization achieves an average Fscore of 0.96 and average boundary distance of 7.7 for D-I, and 0.96 and 9.1, respectively, for D-II. The cup segmentation method attains an average Fscore of 0.88 and average boundary distance of 13.8 for D-I, and 0.85 and 18.0, respectively, for D-II. The estimation errors (mean ± standard deviation) of our method for the value of vertical cup-to-disc diameter ratio against the result of the boundary by the expert of DI and D-II have similar value, namely 0.04 ± 0.04. Overall, the result of ourmethod indicates its robustness for glaucoma evaluation. B Anindita Septiarini anindita.septiarini@gmail.com Agus Harjoko aharjoko@ugm.ac.id Reza Pulungan pulungan@ugm.ac.id Retno Ekantini rekantini@ugm.ac.id 1 Department of Computer Science and Electronics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia 2 Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia 3 Department of Computer Science, Mulawarman University, Samarinda 75123, Indonesia", "title": "" }, { "docid": "526406ca138d241c6d464fa192c7b0e8", "text": "BACKGROUND AND PURPOSE\nWe sought to determine knowledge at the time of symptom onset regarding the signs, symptoms, and risk factors of stroke in patients presenting to the emergency department with potential stroke.\n\n\nMETHODS\nPatients admitted from the emergency department with possible stroke were identified prospectively. A standardized, structured interview with open-ended questions was performed within 48 hours of symptom onset to assess patients' knowledge base concerning stroke signs, symptoms, and risk factors.\n\n\nRESULTS\nOf the 174 eligible patients, 163 patients were able to respond to the interview questions. Of these 163 patients, 39% (63) did not know a single sign or symptom of stroke. Unilateral weakness (26%) and numbness (22%) were the most frequently noted symptoms. Patients aged > or = 65 years were less likely to know a sign or symptom of stroke than those aged < 65 years (percentage not knowing a single sign or symptom, 47% versus 28%, P = .016). Similarly, 43% of patients did not know a single risk factor for stroke. 
The elderly were less likely to know a risk factor than their younger counterparts.\n\n\nCONCLUSIONS\nAlmost 40% of patients admitted with a possible stroke did not know the signs, symptoms, or risk factor of a stroke. Further public education is needed to increase awareness of the warning signs and risk factors of stroke.", "title": "" }, { "docid": "61b0616d960de54b3b6faae9e712f29b", "text": "In this paper we propose a novel method for depth image superresolution which combines recent advances in example based upsampling with variational superresolution based on a known blur kernel. Most traditional depth superresolution approaches try to use additional high resolution intensity images as guidance for superresolution. In our method we learn a dictionary of edge priors from an external database of high and low resolution examples. In a novel variational sparse coding approach this dictionary is used to infer strong edge priors. Additionally to the traditional sparse coding constraints the difference in the overlap of neighboring edge patches is minimized in our optimization. These edge priors are used in a novel variational superresolution as anisotropic guidance of the higher order regularization. Both the sparse coding and the variational superresolution of the depth are solved based on a primal-dual formulation. In an exhaustive numerical and visual evaluation we show that our method clearly outperforms existing approaches on multiple real and synthetic datasets.", "title": "" }, { "docid": "f942a0bcda6a9b3f6605cd6263ac0b5c", "text": "In nose surgery, carved or crushed cartilage used as a graft has some disadvantages, chiefly that it may be perceptible through the nasal skin after tissue resolution is complete. To overcome these problems and to obtain a smoother surface, the authors initiated the use of Surgicel-wrapped diced cartilage. This innovative technique has been used by the authors on 2365 patients over the past 10 years: in 165 patients with traumatic nasal deformity, in 350 patients with postrhinoplasty deformity, and in 1850 patients during primary rhinoplasty. The highlights of the surgical procedure include harvested cartilage (septal, alar, conchal, and sometimes costal) cut in pieces of 0.5 to 1 mm using a no. 11 blade. The fine-textured cartilage mass is then wrapped in one layer of Surgicel and moistened with an antibiotic (rifamycin). The graft is then molded into a cylindrical form and inserted under the dorsal nasal skin. In the lateral wall and tip of the nose, some overcorrection is performed depending on the type of deformity. When the mucosal stitching is complete, this graft can be externally molded, like plasticine, under the dorsal skin. In cases of mild-to-moderate nasal depression, septal and conchal cartilages are used in the same manner to augment the nasal dorsum with consistently effective and durable results. In cases with more severe defects of the nose, costal cartilage is necessary to correct both the length of the nose and the projection of the columella. In patients with recurrent deviation of the nasal bridge, this technique provided a simple solution to the problem. After overexcision of the dorsal part of deviated septal cartilage and insertion of Surgicel-wrapped diced cartilage, a straight nose was obtained in all patients with no recurrence (follow-up of 1 to 10 years). 
The technique also proved to be highly effective in primary rhinoplasties to camouflage bone irregularities after hump removal in patients with thin nasal skin and/or in cases when excessive hump removal was performed. As a complication, in six patients early postoperative swelling was more than usual. In 16 patients, overcorrection was persistent owing to fibrosis, and in 11 patients resorption was excessive beyond the expected amount. A histologic evaluation was possible in 16 patients, 3, 6, and 12 months postoperatively, by removing thin slices of excess cartilage from the dorsum of the nose during touch-up surgery. This graft showed a mosaic-type alignment of graft cartilage with fibrous tissue connection among the fragments. In conclusion, this type of graft is very easy to apply, because a plasticine-like material is obtained that can be molded with the fingers, giving a smooth surface with desirable form and long-lasting results in all cases. The favorable results obtained by this technique have led the authors to use Surgicel-wrapped diced cartilage routinely in all types of rhinoplasty.", "title": "" }, { "docid": "fae55cf048de769f7b57c3a02cc02f8e", "text": "Ranking fraud in the mobile App market refers to fraudulent or deceptive activities which have a purpose of bumping up the Apps in the popularity list. Indeed, it becomes more and more frequent for App developers to use shady means, such as inflating their Apps' sales or posting phony App ratings, to commit ranking fraud. While the importance of preventing ranking fraud has been widely recognized, there is limited understanding and research in this area. To this end, in this paper, we provide a holistic view of ranking fraud and propose a ranking fraud detection system for mobile Apps. Specifically, we first propose to accurately locate the ranking fraud by mining the active periods, namely leading sessions, of mobile Apps. Such leading sessions can be leveraged for detecting the local anomaly instead of globalanomaly of App rankings. Furthermore, we investigate three types of evidences, i.e., ranking based evidences, rating based evidences and review based evidences, by modeling Apps' ranking, rating and review behaviors through statistical hypotheses tests. In addition, we propose an optimization based aggregation method to integrate all the evidences for fraud detection. Finally, we evaluate the proposed system with real-world App data collected from the iOS App Store for a long time period. In the experiments, we validate the effectiveness of the proposed system, and show the scalability of the detection algorithm as well as some regularity of ranking fraud activities.", "title": "" }, { "docid": "e1095273f4d65e31ea53d068c3dee348", "text": "We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the /spl lscr//sub 1/-norm. A number of recent theoretical results on sparsifying properties of /spl lscr//sub 1/ penalties justify this choice. Explicitly enforcing the sparsity of the representation is motivated by a desire to obtain a sharp estimate of the spatial spectrum that exhibits super-resolution. We propose to use the singular value decomposition (SVD) of the data matrix to summarize multiple time or frequency samples. 
Our formulation leads to an optimization problem, which we solve efficiently in a second-order cone (SOC) programming framework by an interior point implementation. We propose a grid refinement method to mitigate the effects of limiting estimates to a grid of spatial locations and introduce an automatic selection criterion for the regularization parameter involved in our approach. We demonstrate the effectiveness of the method on simulated data by plots of spatial spectra and by comparing the estimator variance to the Crame/spl acute/r-Rao bound (CRB). We observe that our approach has a number of advantages over other source localization techniques, including increased resolution, improved robustness to noise, limitations in data quantity, and correlation of the sources, as well as not requiring an accurate initialization.", "title": "" }, { "docid": "06860bf1ede8dfe83d3a1b01fe4df835", "text": "The Internet and computer networks are exposed to an increasing number of security threats. With new types of attacks appearing continually, developing flexible and adaptive security oriented approaches is a severe challenge. In this context, anomaly-based network intrusion detection techniques are a valuable technology to protect target systems and networks against malicious activities. However, despite the variety of such methods described in the literature in recent years, security tools incorporating anomaly detection functionalities are just starting to appear, and several important problems remain to be solved. This paper begins with a review of the most well-known anomaly-based intrusion detection techniques. Then, available platforms, systems under development and research projects in the area are presented. Finally, we outline the main challenges to be dealt with for the wide scale deployment of anomaly-based intrusion detectors, with special emphasis on assessment issues. a 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f8fb643997b42e72f64f6b3eaca25c3a", "text": "The great success of deep learning shows that its technology contains profound truth, and understanding its internal mechanism not only has important implications for the development of its technology and effective application in various fields, but also provides meaningful insights into the understanding of human brain mechanism. At present, most of the theoretical research on deep learning is based on mathematics. This dissertation proposes that the neural network of deep learning is a physical system, examines deep learning from three different perspectives: microscopic, macroscopic, and physical world views, answers multiple theoretical puzzles in deep learning by using physics principles. For example, from the perspective of quantum mechanics and statistical physics, this dissertation presents the calculation methods for convolution calculation, pooling, normalization, and Restricted Boltzmann Machine, as well as the selection of cost functions, explains why deep learning must be deep, what characteristics are learned in deep learning, why Convolutional Neural Networks do not have to be trained layer by layer, and the limitations of deep learning, etc., and proposes the theoretical direction and basis for the further development of deep learning now and in the future. 
The brilliance of physics flashes in deep learning, we try to establish the deep learning technology based on the scientific theory of physics.", "title": "" }, { "docid": "298d67edd4095672c69f14598ba12ab6", "text": "Cryptocurrencies have emerged as important financial software systems. They rely on a secure distributed ledger data structure; mining is an integral part of such systems. Mining adds records of past transactions to the distributed ledger known as Blockchain, allowing users to reach secure, robust consensus for each transaction. Mining also introduces wealth in the form of new units of currency. Cryptocurrencies lack a central authority to mediate transactions because they were designed as peer-to-peer systems. They rely on miners to validate transactions. Cryptocurrencies require strong, secure mining algorithms. In this paper we survey and compare and contrast current mining techniques as used by major Cryptocurrencies. We evaluate the strengths, weaknesses, and possible threats to each mining strategy. Overall, a perspective on how Cryptocurrencies mine, where they have comparable performance and assurance, and where they have unique threats and strengths are outlined.", "title": "" }, { "docid": "7ce314babce8509724f05beb4c3e5cdd", "text": "This paper presents WikiCoref, an English corpus annotated for anaphoric relations, where all documents are from the English version of Wikipedia. Our annotation scheme follows the one of OntoNotes with a few disparities. We annotated each markable with coreference type, mention type and the equivalent Freebase topic. Since most similar annotation efforts concentrate on very specific types of written text, mainly newswire, there is a lack of resources for otherwise over-used Wikipedia texts. The corpus described in this paper addresses this issue. We present a freely available resource we initially devised for improving coreference resolution algorithms dedicated to Wikipedia texts. Our corpus has no restriction on the topics of the documents being annotated, and documents of various sizes have been considered for annotation.", "title": "" }, { "docid": "7313ab8f065b8cc167aa2d4cd999eae3", "text": "LossCalcTM version 2.0 is the Moody's KMV model to predict loss given default (LGD) or (1 recovery rate). Lenders and investors use LGD to estimate future credit losses. LossCalc is a robust and validated model of LGD for loans, bonds, and preferred stocks for the US, Canada, the UK, Continental Europe, Asia, and Latin America. It projects LGD for defaults occurring immediately and for defaults that may occur in one year. LossCalc is a statistical model that incorporates information at different levels: collateral, instrument, firm, industry, country, and the macroeconomy to predict LGD. It significantly improves on the use of historical recovery averages to predict LGD, helping institutions to better price and manage credit risk. LossCalc is built on a global dataset of 3,026 recovery observations for loans, bonds, and preferred stock from 1981-2004. This dataset includes over 1,424 defaults of both public and private firms—both rated and unrated instruments—in all industries. LossCalc will help institutions better manage their credit risk and can play a critical role in meeting the Basel II requirements on advanced Internal Ratings Based Approach. This paper describes Moody's KMV LossCalc, its predictive factors, the modeling approach, and its out of-time and out of-sample model validation. AUTHORS Greg M. Gupton Roger M. 
Stein", "title": "" }, { "docid": "b0a62d33cad605ba7ffad2ea62caa82c", "text": "We attempt to use DCGANs (deep convolutional generative adversarial nets) to tackle the automatic colorization of black and white photos to combat the tendency for vanilla neural nets to ”average out” the results. We construct a small feed-forward convolutional neural network as a baseline colorization system. We train the baseline model on the CIFAR-10 dataset with a per-pixel Euclidean loss function on the chrominance values and achieve sensible but mediocre results. We propose using the adversarial framework as proposed by Goodfellow et al. [5] as an alternative to the loss function—we reformulate the baseline model as a generator model that maps grayscale images and random noise input to the color image space, and construct a discriminator model that is trained to predict the probability that a given colorization was sampled from data distribution rather than generated by the generator model, conditioned on the grayscale image. We analyze the challenges that stand in the way of training adversarial networks, and suggest future steps to test the viability of the model.", "title": "" }, { "docid": "feb51135512c92eee0398748bf3c8b7e", "text": "The past 25 years has seen phenomenal growth of interest in judgemental approaches to forecasting and a significant change of attitude on the part of researchers to the role of judgement. While previously judgement was thought to be the enemy of accuracy, today judgement is recognised as an indispensable component of forecasting and much research attention has been directed at understanding and improving its use. Human judgement can be demonstrated to provide a significant benefit to forecasting accuracy but it can also be subject to many biases. Much of the research has been directed at understanding and managing these strengths and weaknesses. An indication of the explosion of research interest in this area can be gauged by the fact that over 200 studies are referenced in this review. D 2006 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9f037fd53e6547b689f88fc1c1bed10a", "text": "We study feature selection as a means to optimize the baseline clickbait detector employed at the Clickbait Challenge 2017 [6]. The challenge’s task is to score the “clickbaitiness” of a given Twitter tweet on a scale from 0 (no clickbait) to 1 (strong clickbait). Unlike most other approaches submitted to the challenge, the baseline approach is based on manual feature engineering and does not compete out of the box with many of the deep learning-based approaches. We show that scaling up feature selection efforts to heuristically identify better-performing feature subsets catapults the performance of the baseline classifier to second rank overall, beating 12 other competing approaches and improving over the baseline performance by 20%. This demonstrates that traditional classification approaches can still keep up with deep learning on this task.", "title": "" }, { "docid": "353d9add247202dc1a31f69064c68c5c", "text": "Deep learning technologies, which are the key components of state-of-the-art Artificial Intelligence (AI) services, have shown great success in providing human-level capabilities for a variety of tasks, such as visual analysis, speech recognition, and natural language processing and etc. 
Building a production-level deep learning model is a non-trivial task, which requires a large amount of training data, powerful computing resources, and human expertises. Therefore, illegitimate reproducing, distribution, and the derivation of proprietary deep learning models can lead to copyright infringement and economic harm to model creators. Therefore, it is essential to devise a technique to protect the intellectual property of deep learning models and enable external verification of the model ownership.\n In this paper, we generalize the \"digital watermarking'' concept from multimedia ownership verification to deep neural network (DNNs) models. We investigate three DNN-applicable watermark generation algorithms, propose a watermark implanting approach to infuse watermark into deep learning models, and design a remote verification mechanism to determine the model ownership. By extending the intrinsic generalization and memorization capabilities of deep neural networks, we enable the models to learn specially crafted watermarks at training and activate with pre-specified predictions when observing the watermark patterns at inference. We evaluate our approach with two image recognition benchmark datasets. Our framework accurately (100%) and quickly verifies the ownership of all the remotely deployed deep learning models without affecting the model accuracy for normal input data. In addition, the embedded watermarks in DNN models are robust and resilient to different counter-watermark mechanisms, such as fine-tuning, parameter pruning, and model inversion attacks.", "title": "" }, { "docid": "245b313fa0a72707949f20c28ce7e284", "text": "We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive due to its simplicity, however, they are also known to converge quite slowly. In this paper we present a Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) which preserves the computational simplicity of ISTA, but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA.", "title": "" }, { "docid": "e029a189f85f9cb47a5ad0a766efad1d", "text": "\"Next generation\" data acquisition technologies are allowing scientists to collect exponentially more data at a lower cost. These trends are broadly impacting many scientific fields, including genomics, astronomy, and neuroscience. We can attack the problem caused by exponential data growth by applying horizontally scalable techniques from current analytics systems to accelerate scientific processing pipelines.\n In this paper, we describe ADAM, an example genomics pipeline that leverages the open-source Apache Spark and Parquet systems to achieve a 28x speedup over current genomics pipelines, while reducing cost by 63%. From building this system, we were able to distill a set of techniques for implementing scientific analyses efficiently using commodity \"big data\" systems. 
To demonstrate the generality of our architecture, we then implement a scalable astronomy image processing system which achieves a 2.8--8.9x improvement over the state-of-the-art MPI-based system.", "title": "" }, { "docid": "fd786ae1792e559352c75940d84600af", "text": "In this paper, we obtain an (1 − e−1)-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n) function value computations. c © 2003 Published by Elsevier B.V.", "title": "" } ]
scidocsrr
3a90dcbce0a014af18d08f61131e18aa
Virtual reality and tactile augmentation in the treatment of spider phobia: a case report.
[ { "docid": "c9b7832cd306fc022e4a376f10ee8fc8", "text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.", "title": "" } ]
[ { "docid": "101e93562935c799c3c3fa62be98bf09", "text": "This paper presents a technical approach to robot learning of motor skills which combines active intrinsically motivated learning with imitation learning. Our architecture, called SGIM-D, allows efficient learning of high-dimensional continuous sensorimotor inverse models in robots, and in particular learns distributions of parameterised motor policies that solve a corresponding distribution of parameterised goals/tasks. This is made possible by the technical integration of imitation learning techniques within an algorithm for learning inverse models that relies on active goal babbling. After reviewing social learning and intrinsic motivation approaches to action learning, we describe the general framework of our algorithm, before detailing its architecture. In an experiment where a robot arm has to learn to use a flexible fishing line , we illustrate that SGIM-D efficiently combines the advantages of social learning and intrinsic motivation and benefits from human demonstration properties to learn how to produce varied outcomes in the environment, while developing more precise control policies in large spaces.", "title": "" }, { "docid": "8a322a2d1ea98a7232c37797d2db2bfa", "text": "The link between affect and student learning has been the subject of increasing attention in recent years. Affective states such as flow and curiosity tend to have positive correlations with learning while negative states such as boredom and frustration have the opposite effect. Student engagement and motivation have also been shown to be critical in improving learning gains with computer-based learning environments. Consequently, it is a design goal of many computer-based learning environments to encourage positive affect and engagement while students are learning. Game-based learning environments offer significant potential for increasing student engagement and motivation. However, it is unclear how affect and engagement interact with learning in game-based learning environments. This work presents an in-depth analysis of how these phenomena occur in the game-based learning environment, Crystal Island. 
The findings demonstrate that game-based learning environments can simultaneously support learning and promote positive affect and engagement.", "title": "" }, { "docid": "26a599c22c173f061b5d9579f90fd888", "text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto", "title": "" }, { "docid": "e9dcc0eb5894907142dffdf2aa233c35", "text": "The explosion of the web and the abundance of linked data demand for effective and efficient methods for storage, management and querying. More specifically, the ever-increasing size and number of RDF data collections raises the need for efficient query answering, and dictates the usage of distributed data management systems for effectively partitioning and querying them. To this direction, Apache Spark is one of the most active big-data approaches, with more and more systems adopting it, for efficient, distributed data management. The purpose of this paper is to provide an overview of the existing works dealing with efficient query answering, in the area of RDF data, using Apache Spark. We discuss on the characteristics and the key dimension of such systems, we describe novel ideas in the area, and the corresponding drawbacks, and provide directions for future work.", "title": "" }, { "docid": "576091bb08f9a37e0be8c38294e155e3", "text": "This research will demonstrate hacking techniques on the modern automotive network and describe the design and implementation of a benchtop simulator. In currently-produced vehicles, the primary network is based on the Controller Area Network (CAN) bus described in the ISO 11898 family of protocols. The CAN bus performs well in the electronically noisy environment found in the modern automobile. While the CAN bus is ideal for the exchange of information in this environment, when the protocol was designed security was not a priority due to the presumed isolation of the network. That assumption has been invalidated by recent, well-publicized attacks where hackers were able to remotely control an automobile, leading to a product recall that affected more than a million vehicles. 
The automobile has a multitude of electronic control units (ECUs) which are interconnected with the CAN bus to control the various systems which include the infotainment, light, and engine systems. The CAN bus allows the ECUs to share information along a common bus which has led to improvements in fuel and emission efficiency, but has also introduced vulnerabilities by giving access on the same network to cyber-physical systems (CPS). These CPS systems include the anti-lock braking systems (ABS) and on late model vehicles the ability to turn the steering wheel and control the accelerator. Testing functionality on an operational vehicle can be dangerous and place others in harm's way, but simulating the vehicle network and functionality of the ECUs on a bench-top system provides a safe way to test for vulnerabilities and to test possible security solutions to prevent CPS access over the CAN bus network. This paper will describe current research on the automotive network, provide techniques in capturing network traffic for playback, and demonstrate the design and implementation of a benchtop system for continued research on the CAN bus.", "title": "" }, { "docid": "9218a87b0fba92874e5f7917c925843a", "text": "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than 1% of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any which have been previously learned from human feedback.", "title": "" }, { "docid": "031dbd65ecb8d897d828cd5d904059c1", "text": "Especially in ill-defined problems like complex, real-world tasks more than one way leads to a solution. Until now, the evaluation of information visualizations was often restricted to measuring outcomes only (time and error) or insights into the data set. A more detailed look into the processes which lead to or hinder task completion is provided by analyzing users' problem solving strategies. A study illustrates how they can be assessed and how this knowledge can be used in participatory design to improve a visual analytics tool. In order to provide the users a tool which functions as a real scaffold, it should allow them to choose their own path to Rome. We discuss how evaluation of problem solving strategies can shed more light on the users' \"exploratory minds\".", "title": "" }, { "docid": "4019beb9fa6ec59b4b19c790fe8ff832", "text": "R. Cropanzano, D. E. Rupp, and Z. S. Byrne (2003) found that emotional exhaustion (i.e., 1 dimension of burnout) negatively affects organizational citizenship behavior (OCB). The authors extended this research by investigating relationships among 3 dimensions of burnout (emotional exhaustion, depersonalization, and diminished personal accomplishment) and OCB. They also affirmed the mediating effect of job involvement on these relationships. 
Data were collected from 296 paired samples of service employees and their supervisors from 12 hotels and restaurants in Taiwan. Findings demonstrated that emotional exhaustion and diminished personal accomplishment were related negatively to OCB, whereas depersonalization had no independent effect on OCB. Job involvement mediated the relationships among emotional exhaustion, diminished personal accomplishment, and OCB.", "title": "" }, { "docid": "d81a287ab942c60980b0599007e1a2d6", "text": "MicroRNAs (miRNAs) are small and non-coding RNA molecules that inhibit gene expression posttranscriptionally. They play important roles in several biological processes, and in recent years there has been an interest in studying how they are related to the pathogenesis of diseases. Although there are already some databases that contain information for miRNAs and their relation with illnesses, their curation represents a significant challenge due to the amount of information that is being generated every day. In particular, respiratory diseases are poorly documented in databases, despite the fact that they are of increasing concern regarding morbidity, mortality and economic impacts. In this work, we present the results that we obtained in the BioCreative Interactive Track (IAT), using a semiautomatic approach for improving biocuration of miRNAs related to diseases. Our procedures will be useful to complement databases that contain this type of information. We adapted the OntoGene text mining pipeline and the ODIN curation system in a full-text corpus of scientific publications concerning one specific respiratory disease: idiopathic pulmonary fibrosis, the most common and aggressive of the idiopathic interstitial cases of pneumonia. We curated 823 miRNA text snippets and found a total of 246 miRNAs related to this disease based on our semiautomatic approach with the system OntoGene/ODIN. The biocuration throughput improved by a factor of 12 compared with traditional manual biocuration. A significant advantage of our semiautomatic pipeline is that it can be applied to obtain the miRNAs of all the respiratory diseases and offers the possibility to be used for other illnesses.\n\n\nDatabase URL\nhttp://odin.ccg.unam.mx/ODIN/bc2015-miRNA/.", "title": "" }, { "docid": "81ec86a4e13c4a7fb7f0352ac08938ab", "text": "Although experimental studies support that men generally respond more to visual sexual stimuli than do women, there is substantial variability in this effect. One potential source of variability is the type of stimuli used that may not be of equal interest to both men and women whose preferences may be dependent upon the activities and situations depicted. The current study investigated whether men and women had preferences for certain types of stimuli. We measured the subjective evaluations and viewing times of 15 men and 30 women (15 using hormonal contraception) to sexually explicit photos. Heterosexual participants viewed 216 pictures that were controlled for the sexual activity depicted, gaze of the female actor, and the proportion of the image that the genital region occupied. Men and women did not differ in their overall interest in the stimuli, indicated by equal subjective ratings and viewing times, although there were preferences for specific types of pictures. Pictures of the opposite sex receiving oral sex were rated as least sexually attractive by all participants and they looked longer at pictures showing the female actor's body. 
Women rated pictures in which the female actor was looking indirectly at the camera as more attractive, while men did not discriminate by female gaze. Participants did not look as long at close-ups of genitals, and men and women on oral contraceptives rated genital images as less sexually attractive. Together, these data demonstrate sex-specific preferences for specific types of stimuli even when, across stimuli, overall interest was comparable.", "title": "" }, { "docid": "c78d098573cf34885e32e5aea2fbdaa7", "text": "The design of 3D printable accessible tactile pictures (3DP-ATPs) for young children with visual impairments has the potential to greatly increase the supply of tactile materials that can be used to support emergent literacy skill development. Many caregivers and stakeholders invested in supporting young children with visual impairments have shown interest in using 3D printing to make accessible tactile materials. Unfortunately, the task of designing and producing 3DP-ATPs is far more complex than simply learning to use personal fabrication tools. This paper presents formative research conducted to investigate how six caregiver stakeholder-groups, with diverse skillsets and domain interests, attempt to create purposeful 3DP-ATPs with amateur-focused 3D modeling programs. We expose the experiences of these stakeholder groups as they attempt to design 3DP-ATG for the first time. We discuss how the participant groups practically and conceptually approach the task and focus their design work. Each group demonstrated different combinations of skillsets. In turn, we identify the common activities required of the design task as well how different participants are well suited and motivated to preform those activities. This study suggests that the emerging community of amateur 3DP-ATP designers may benefit from an online creativity support tool to help offset the challenges of designing purposeful 3DP-ATPs that are designed to meet individual children with VI's emergent literacy needs.", "title": "" }, { "docid": "b32e1d3474c5db96f188981b29cbb9c0", "text": "An adversarial example is an example that has been adjusted to produce a wrong label when presented to a system at test time. To date, adversarial example constructions have been demonstrated for classifiers, but not for detectors. If adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehicles. In this paper, we demonstrate a construction that successfully fools two standard detectors, Faster RCNN and YOLO. The existence of such examples is surprising, as attacking a classifier is very different from attacking a detector, and that the structure of detectors – which must search for their own bounding box, and which cannot estimate that box very accurately – makes it quite likely that adversarial patterns are strongly disrupted. We show that our construction produces adversarial examples that generalize well across sequences digitally, even though large perturbations are needed. We also show that our construction yields physical objects that are adversarial.", "title": "" }, { "docid": "efced3407e46faf9fa43ce299add28f4", "text": "This is a pilot study of the use of “Flash cookies” by popular websites. We find that more than 50% of the sites in our sample are using Flash cookies to store information about the user. Some are using it to “respawn” or re-instantiate HTTP cookies deleted by the user. 
Flash cookies often share the same values as HTTP cookies, and are even used on government websites to assign unique values to users. Privacy policies rarely disclose the presence of Flash cookies, and user controls for effectuating privacy preferences are", "title": "" }, { "docid": "38d650cb945dc50d97762186585659a4", "text": "Sustainable biofuels, biomaterials, and fine chemicals production is a critical matter that research teams around the globe are focusing on nowadays. Polyhydroxyalkanoates represent one of the biomaterials of the future due to their physicochemical properties, biodegradability, and biocompatibility. Designing efficient and economic bioprocesses, combined with the respective social and environmental benefits, has brought together scientists from different backgrounds highlighting the multidisciplinary character of such a venture. In the current review, challenges and opportunities regarding polyhydroxyalkanoate production are presented and discussed, covering key steps of their overall production process by applying pure and mixed culture biotechnology, from raw bioprocess development to downstream processing.", "title": "" }, { "docid": "de364eb64d2377c278cd71d98c2c0729", "text": "In recent years methods of data analysis for point processes have received some attention, for example, by Cox & Lewis (1966) and Lewis (1964). In particular Bartlett (1963a, b) has introduced methods of analysis based on the point spectrum. Theoretical models are relatively sparse. In this paper the theoretical properties of a class of processes with particular reference to the point spectrum or corresponding covariance density functions are discussed. A particular result is a self-exciting process with the same second-order properties as a certain doubly stochastic process. These are not distinguishable by methods of data analysis based on these properties.", "title": "" }, { "docid": "29975df3948fdc58d8d2adfe2c72103f", "text": "Antibiotic licensing studies remain a problem in neonates. The classical adult clinical syndrome-based licensing studies do not apply to neonates, where sepsis is the most common infection. The main obstacle to conducting neonatal antibiotic trials is a lack of consensus on the definition of neonatal sepsis itself and the selection of appropriate endpoints. This article describes the difficulties of the clinical and laboratory definitions of neonatal sepsis and reviews the varying designs of previous neonatal sepsis trials. The optimal design of future trials of new antibiotics will need to be based on pharmacokinetic/pharmacodynamic parameters, combined with adequately powered clinical studies to determine safety and efficacy.", "title": "" }, { "docid": "27647d270fc085daedcf150dabb2e7c9", "text": "Obesity is reaching epidemic proportions and is a strong risk factor for a number of cardiovascular and metabolic disorders such as hypertension, type 2 diabetes, dyslipidemia, atherosclerosis, and also certain types of cancers. Despite the constant recommendations of health care organizations regarding the importance of weight control, this goal often fails. Genetic predisposition in combination with inactive lifestyles and high caloric intake leads to excessive weight gain. Even though there may be agreement about the concept that lifestyle changes affecting dietary habits and physical activity are essential to promote weight loss and weight control, the ideal amount and type of exercise and also the ideal diet are still under debate. 
For many years, nutritional intervention studies have been focused on reducing dietary fat with little positive results over the long-term. One of the most studied strategies in the recent years for weight loss is the ketogenic diet. Many studies have shown that this kind of nutritional approach has a solid physiological and biochemical basis and is able to induce effective weight loss along with improvement in several cardiovascular risk parameters. This review discusses the physiological basis of ketogenic diets and the rationale for their use in obesity, discussing the strengths and the weaknesses of these diets together with cautions that should be used in obese patients.", "title": "" }, { "docid": "916a76aa0c4209567a6309885e0b9b32", "text": "The term \"Industry 4.0\" symbolizes new forms of technology and artificial intelligence within production technologies. Smart robots are going to be the game changers within the factories of the future and will work with humans in indispensable teams within many processes. With this fourth industrial revolution, classical production lines are going through comprehensive modernization, e.g. in terms of in-the-box manufacturing, where humans and machines work side by side in so-called \"hybrid teams\". Questions about how to prepare for newly needed engineering competencies for the age of Industry 4.0, how to assess them and how to teach and train e.g. human-robot-teams have to be tackled in future engineering education. The paper presents theoretical aspects and empirical results of a series of studies, carried out to investigate the competencies of virtual collaboration and joint problem solving in virtual worlds.", "title": "" }, { "docid": "6b7c7c075d21dc142661c48bebb78dc4", "text": "In order to convey the most content in their limited space, advertisements embed references to outside knowledge via symbolism. For example, a motorcycle stands for adventure (a positive property the ad wants associated with the product being sold), and a gun stands for danger (a negative property to dissuade viewers from undesirable behaviors). We show how to use symbolic references to better understand the meaning of an ad. We further show how anchoring ad understanding in general-purpose object recognition and image captioning improves results. We formulate the ad understanding task as matching the ad image to human-generated statements that describe the action that the ad prompts, and the rationale it provides for taking this action. Our proposed method outperforms the state of the art on this task, and on an alternative formulation of question-answering on ads. We show additional applications of our learned representations for matching ads to slogans, and clustering ads according to their topic, without extra training.", "title": "" } ]
scidocsrr
b14b8bbc154551465e9894bb5187125c
Coherent and Noncoherent Dictionaries for Action Recognition
[ { "docid": "c1f6052ecf802f1b4b2e9fd515d7ea15", "text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.", "title": "" }, { "docid": "b50c0f5bd7ee7b0fbcc77934a600f7d4", "text": "Local feature descriptors underpin many diverse applications, supporting object recognition, image registration, database search, 3D reconstruction, and more. The recent phenomenal growth in mobile devices and mobile computing in general has created demand for descriptors that are not only discriminative, but also compact in size and fast to extract and match. In response, a large number of binary descriptors have been proposed, each claiming to overcome some limitations of the predecessors. This paper provides a comprehensive evaluation of several promising binary designs. We show that existing evaluation methodologies are not sufficient to fully characterize descriptors’ performance and propose a new evaluation protocol and a challenging dataset. In contrast to the previous reviews, we investigate the effects of the matching criteria, operating points, and compaction methods, showing that they all have a major impact on the systems’ design and performance. Finally, we provide descriptor extraction times for both general-purpose systems and mobile devices, in order to better understand the real complexity of the extraction task. The objective is to provide a comprehensive reference and a guide that will help in selection and design of the future descriptors.", "title": "" }, { "docid": "a25338ae0035e8a90d6523ee5ef667f7", "text": "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. 
Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2% on KTH (better by 3.3%), 95.0% on UCF Sports (better by 3.7%), 57.9% on UCF50 (baseline is 47.9%), and 26.9% on HMDB51 (baseline is 23.2%). Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.", "title": "" } ]
[ { "docid": "396f6b6c09e88ca8e9e47022f1ae195b", "text": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.", "title": "" }, { "docid": "9c9e36a64d82beada8807546636aef20", "text": "Nowadays, FMCW (Frequency Modulated Continuous Wave) radar is widely adapted due to the use of solid state microwave amplifier to generate signal source. The FMCW radar can be implemented and analyzed at low cost and less complexity by using Software Defined Radio (SDR). In this paper, SDR based FMCW radar for target detection and air traffic control radar application is implemented in real time. The FMCW radar model is implemented using open source software and hardware. GNU Radio is utilized for software part of the radar and USRP (Universal Software Radio Peripheral) N210 for hardware part. Log-periodic antenna operating at 1GHZ frequency is used for transmission and reception of radar signals. From the beat signal obtained at receiver end and range resolution of signal, target is detected. Further low pass filtering followed by Fast Fourier Transform (FFT) is performed to reduce computational complexity.", "title": "" }, { "docid": "88e1eaad5cfc5aded16f588cd10cb244", "text": "BACKGROUND AND AIMS\nIntestinal barrier impairment is incriminated in the pathophysiology of intestinal gut disorders associated with psychiatric comorbidity. Increased intestinal permeability associated with upload of lipopolysaccharides (LPS) translocation induces depressive symptoms. Gut microbiota and probiotics alter behavior and brain neurochemistry. Since Lactobacillus farciminis suppresses stress-induced hyperpermeability, we examined whether (i) L. farciminis affects the HPA axis stress response, (ii) stress induces changes in LPS translocation and central cytokine expression which may be reversed by L. farciminis, (iii) the prevention of \"leaky\" gut and LPS upload are involved in these effects.\n\n\nMETHODS\nAt the end of the following treatments female rats were submitted to a partial restraint stress (PRS) or sham-PRS: (i) oral administration of L. farciminis during 2 weeks, (ii) intraperitoneal administration of ML-7 (a specific myosin light chain kinase inhibitor), (iii) antibiotic administration in drinking water during 12 days. 
After PRS or sham-PRS session, we evaluated LPS levels in portal blood, plasma corticosterone and adrenocorticotropic hormone (ACTH) levels, hypothalamic corticotropin releasing factor (CRF) and pro-inflammatory cytokine mRNA expression, and colonic paracellular permeability (CPP).\n\n\nRESULTS\nPRS increased plasma ACTH and corticosterone; hypothalamic CRF and pro-inflammatory cytokine expression; CPP and portal blood concentration of LPS. L. farciminis and ML-7 suppressed stress-induced hyperpermeability, endotoxemia and prevented HPA axis stress response and neuroinflammation. Antibiotic reduction of luminal LPS concentration prevented HPA axis stress response and increased hypothalamic expression of pro-inflammatory cytokines.\n\n\nCONCLUSION\nThe attenuation of the HPA axis response to stress by L. farciminis depends upon the prevention of intestinal barrier impairment and decrease of circulating LPS levels.", "title": "" }, { "docid": "19dea4fca2a60fad4b360d34b15480ae", "text": "We present Neural Autoregressive Distribution Estimation (NADE) models, which are neural network architectures applied to the problem of unsupervised distribution and density estimation. They leverage the probability product rule and a weight sharing scheme inspired from restricted Boltzmann machines, to yield an estimator that is both tractable and has good generalization performance. We discuss how they achieve competitive performance in modeling both binary and real-valued observations. We also present how deep NADE models can be trained to be agnostic to the ordering of input dimensions used by the autoregressive product rule decomposition. Finally, we also show how to exploit the topological structure of pixels in images using a deep convolutional architecture for NADE.", "title": "" }, { "docid": "2e3cee13657129d26ec236f9d2641e6c", "text": "Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). More specifically, on the LFW database, we achieve 98.23% accuracy under the standard protocol and a verification rate of 87.65% at FAR of 0.1% under the BLUFR protocol. 
For the IJB-A benchmark, our accuracies are as follows: TAR of 51.4% at FAR of 0.1% (verification); Rank 1 retrieval of 82.0% (closed-set search); FNIR of 61.7% at FPIR of 1% (open-set search). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother’s (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds", "title": "" }, { "docid": "9800cb574743679b4517818c9653ada5", "text": "This paper aims to accelerate the test-time computation of deep convolutional neural networks (CNNs). Unlike existing methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We minimize the reconstruction error of the nonlinear responses, subject to a low-rank constraint which helps to reduce the complexity of filters. We develop an effective solution to this constrained nonlinear optimization problem. An algorithm is also presented for reducing the accumulated error when multiple layers are approximated. A whole-model speedup ratio of 4× is demonstrated on a large network trained for ImageNet, while the top-5 error rate is only increased by 0.9%. Our accelerated model has a comparably fast speed as the “AlexNet” [11], but is 4.7% more accurate.", "title": "" }, { "docid": "d0b2999de796ec3215513536023cc2be", "text": "Recently proposed machine comprehension (MC) application is an effort to deal with natural language understanding problem. However, the small size of machine comprehension labeled data confines the application of deep neural networks architectures that have shown advantage in semantic inference tasks. Previous methods use a lot of NLP tools to extract linguistic features but only gain little improvement over simple baseline. In this paper, we build an attention-based recurrent neural network model, train it with the help of external knowledge which is semantically relevant to machine comprehension, and achieves a new state-of-the-art result.", "title": "" }, { "docid": "d603806f579a937a24ad996543fe9093", "text": "Early vision relies heavily on rectangular windows for tasks such as smoothing and computing correspondence. While rectangular windows are efficient, they yield poor results near object boundaries. We describe an efficient method for choosing an arbitrarily shaped connected window, in a manner which varies at each pixel. Our approach can be applied to many problems, including image restoration and visual correspondence. It runs in linear time, and takes a few seconds on traditional benchmark images. Performance on both synthetic and real imagery with ground truth appears promising.", "title": "" }, { "docid": "c5cb0ae3102fcae584e666a1ba3e73ed", "text": "A new generation of computational cameras is emerging, spawned by the introduction of the Lytro light-field camera to the consumer market and recent accomplishments in the speed at which light can be captured. By exploiting the co-design of camera optics and computational processing, these cameras capture unprecedented details of the plenoptic function: a ray-based model for light that includes the color spectrum as well as spatial, temporal, and directional variation. 
Although digital light sensors have greatly evolved in the last years, the visual information captured by conventional cameras has remained almost unchanged since the invention of the daguerreotype. All standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons. In the process, all visual information is irreversibly lost, except for a two-dimensional, spatially varying subset: the common photograph.\n This course reviews the plenoptic function and discusses approaches for optically encoding high-dimensional visual information that is then recovered computationally in post-processing. It begins with an overview of the plenoptic dimensions and shows how much of this visual information is irreversibly lost in conventional image acquisition. Then it discusses the state of the art in joint optical modulation and computation reconstruction for acquisition of high-dynamic-range imagery and spectral information. It unveils the secrets behind imaging techniques that have recently been featured in the news and outlines other aspects of light that are of interest for various applications before concluding with question, answers, and a short discussion.", "title": "" }, { "docid": "28e538dcdcfed7693f0c1e4fe4d29c94", "text": "The data used in the test consisted of 500 pages selected at random from a collection of approximately 2,500 documents containing 100,000 pages. The documents in this collection were chosen by the U.S. Department of Energy (DOE) to represent the kinds of documents from which the DOE plans to build large, full-text retrieval databases using OCR for document conversion. The documents are mostly scientific and technical papers [Nartker 92].", "title": "" }, { "docid": "f1bc297544e333f08387cfd410e1dc75", "text": "Cascades are ubiquitous in various network environments. How to predict these cascades is highly nontrivial in several vital applications, such as viral marketing, epidemic prevention and traffic management. Most previous works mainly focus on predicting the final cascade sizes. As cascades are typical dynamic processes, it is always interesting and important to predict the cascade size at any time, or predict the time when a cascade will reach a certain size (e.g. an threshold for outbreak). In this paper, we unify all these tasks into a fundamental problem: cascading process prediction. That is, given the early stage of a cascade, how to predict its cumulative cascade size of any later time? For such a challenging problem, how to understand the micro mechanism that drives and generates the macro phenomena (i.e. cascading process) is essential. Here we introduce behavioral dynamics as the micro mechanism to describe the dynamic process of a node's neighbors getting infected by a cascade after this node getting infected (i.e. one-hop subcascades). Through data-driven analysis, we find out the common principles and patterns lying in behavioral dynamics and propose a novel Networked Weibull Regression model for behavioral dynamics modeling. After that we propose a novel method for predicting cascading processes by effectively aggregating behavioral dynamics, and present a scalable solution to approximate the cascading process with a theoretical guarantee. We extensively evaluate the proposed method on a large scale social network dataset. 
The results demonstrate that the proposed method can significantly outperform other state-of-the-art baselines in multiple tasks including cascade size prediction, outbreak time prediction and cascading process prediction.", "title": "" }, { "docid": "c28ee3a41d05654eedfd379baf2d5f24", "text": "The problem of classifying subjects into disease categories is of common occurrence in medical research. Machine learning tools such as Artificial Neural Network (ANN), Support Vector Machine (SVM) and Logistic Regression (LR) and Fisher’s Linear Discriminant Analysis (LDA) are widely used in the areas of prediction and classification. The main objective of these competing classification strategies is to predict a dichotomous outcome (e.g. disease/healthy) based on several features.", "title": "" }, { "docid": "eee3cbeb230fb5bc454e5850bb007169", "text": "Unicycle mobile robot is wheeled mobile robot that can stand and move around using one wheel. It has attached a lot of researchers to conduct studies about the system, particularly in the design of the system mechanisms and the control strategies. Unlike two wheel balancing mobile robot which mechanically stable on one side, unicycle mobile robot requires additional mechanisms to keep balancing robot on all sides. By assuming that both roll dynamics and pitch dynamics are decoupled, so the balancing mechanisms can be designed separately. The reaction wheel is used for obtaining balancing on the roll angle by rotating the disc to generate momentum. While the wheeled robot is used for obtaining balancing on the pitch angle by rotating wheel to move forward or backward. A PID controller is used as balancing control which will control the rotation motor on the reaction disc and wheel based on the pitch and roll feedback from the sensor. By adding the speed controller to the pitch control, the system will compensate automatically for perfectly center of gravity on the robot. Finally, the unicycle robot will be able to balance on pitch angle and roll angle. Based on simulation result validates that robot can balance using PID controller, while based on balancing pitch experiment result, robot can achieve balancing with maximum inclination about ±23 degree on pitch angle and ±3.5 degree on roll angle with steady state error 0.1 degree.", "title": "" }, { "docid": "27834a3ad7148d4174a289580ef9f514", "text": "We explore the power of spatial context as a self-supervisory signal for learning visual representations. In particular, we propose spatial context networks that learn to predict a representation of one image patch from another image patch, within the same image, conditioned on their real-valued relative spatial offset. Unlike auto-encoders, that aim to encode and reconstruct original image patches, our network aims to encode and reconstruct intermediate representations of the spatially offset patches. As such, the network learns a spatially conditioned contextual representation. By testing performance with various patch selection mechanisms we show that focusing on object-centric patches is important, and that using object proposal as a patch selection mechanism leads to the highest improvement in performance. Further, unlike auto-encoders, context encoders [21], or other forms of unsupervised feature learning, we illustrate that contextual supervision (with pre-trained model initialization) can improve on existing pre-trained model performance. 
We build our spatial context networks on top of standard VGG_19 and CNN_M architectures and, among other things, show that we can achieve improvements (with no additional explicit supervision) over the original ImageNet pre-trained VGG_19 and CNN_M models in object categorization and detection on VOC2007.", "title": "" }, { "docid": "d5330600041fd35290004a74aa38a7da", "text": "We present the EpiReader, a novel model for machine comprehension of text. Machine comprehension of unstructured, real-world text is a major research goal for natural language processing. Current tests of machine comprehension pose questions whose answers can be inferred from some supporting text, and evaluate a model’s response to the questions. The EpiReader is an end-to-end neural model comprising two components: the first component proposes a small set of candidate answers after comparing a question to its supporting text, and the second component formulates hypotheses using the proposed candidates and the question, then reranks the hypotheses based on their estimated concordance with the supporting text. We present experiments demonstrating that the EpiReader sets a new state-of-the-art on the CNN and Children’s Book Test machine comprehension benchmarks, outperforming previous neural models by a significant margin.", "title": "" }, { "docid": "e68992d53fa5bac20f8a4f17d72c7d0d", "text": "In the field of pattern recognition, data analysis, and machine learning, data points are usually modeled as high-dimensional vectors. Due to the curse-of-dimensionality, it is non-trivial to efficiently process the orginal data directly. Given the unique properties of nonlinear dimensionality reduction techniques, nonlinear learning methods are widely adopted to reduce the dimension of data. However, existing nonlinear learning methods fail in many real applications because of the too-strict requirements (for real data) or the difficulty in parameters tuning. Therefore, in this paper, we investigate the manifold learning methods which belong to the family of nonlinear dimensionality reduction methods. Specifically, we proposed a new manifold learning principle for dimensionality reduction named Curved Cosine Mapping (CCM). Based on the law of cosines in Euclidean space, CCM applies a brand new mapping pattern to manifold learning. In CCM, the nonlinear geometric relationships are obtained by utlizing the law of cosines, and then quantified as the dimensionality-reduced features. Compared with the existing approaches, the model has weaker theoretical assumptions over the input data. Moreover, to further reduce the computation cost, an optimized version of CCM is developed. Finally, we conduct extensive experiments over both artificial and real-world datasets to demonstrate the performance of proposed techniques.", "title": "" }, { "docid": "bfee1553c6207909abc9820e741d6e01", "text": "Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic technique that integrates data encryption with access control for ensuring data security in IoT systems. However, the efficiency problem of CP-ABE is still a bottleneck limiting its development and application. A widespread consensus is that the computation overhead of bilinear pairing is excessive in the practical application of ABE, especially for the devices or the processors with limited computational resources and power supply. 
In this paper, we proposed a novel pairing-free data access control scheme based on CP-ABE using elliptic curve cryptography, abbreviated PF-CP-ABE. We replace complicated bilinear pairing with simple scalar multiplication on elliptic curves, thereby reducing the overall computation overhead. And we designed a new way of key distribution that it can directly revoke a user or an attribute without updating other users’ keys during the attribute revocation phase. Besides, our scheme use linear secret sharing scheme access structure to enhance the expressiveness of the access policy. The security and performance analysis show that our scheme significantly improved the overall efficiency as well as ensured the security.", "title": "" }, { "docid": "80f31bb04f4714d7a14499d5d97be8da", "text": "We investigate the importance of text analysis for stock price prediction. In particular, we introduce a system that forecasts companies’ stock price changes (UP, DOWN, STAY) in response to financial events reported in 8-K documents. Our results indicate that using text boosts prediction accuracy over 10% (relative) over a strong baseline that incorporates many financially-rooted features. This impact is most important in the short term (i.e., the next day after the financial event) but persists for up to five days.", "title": "" }, { "docid": "cdb0e65f89f94e436e8c798cd0188d66", "text": "Visual storytelling aims to generate human-level narrative language (i.e., a natural paragraph with multiple sentences) from a photo streams. A typical photo story consists of a global timeline with multi-thread local storylines, where each storyline occurs in one different scene. Such complex structure leads to large content gaps at scene transitions between consecutive photos. Most existing image/video captioning methods can only achieve limited performance, because the units in traditional recurrent neural networks (RNN) tend to “forget” the previous state when the visual sequence is inconsistent. In this paper, we propose a novel visual storytelling approach with Bidirectional Multi-thread Recurrent Neural Network (BMRNN). First, based on the mined local storylines, a skip gated recurrent unit (sGRU) with delay control is proposed to maintain longer range visual information. Second, by using sGRU as basic units, the BMRNN is trained to align the local storylines into the global sequential timeline. Third, a new training scheme with a storyline-constrained objective function is proposed by jointly considering both global and local matches. Experiments on three standard storytelling datasets show that the BMRNN model outperforms the state-of-the-art methods.", "title": "" } ]
scidocsrr
141d46d7a3e8ef19846932e44e7d0da4
Automatic number plate recognition system by character position method
[ { "docid": "b4316fcbc00b285e11177811b61d2b99", "text": "Automatic license plate recognition (ALPR) is one of the most important aspects of applying computer techniques towards intelligent transportation systems. In order to recognize a license plate efficiently, however, the location of the license plate, in most cases, must be detected in the first place. Due to this reason, detecting the accurate location of a license plate from a vehicle image is considered to be the most crucial step of an ALPR system, which greatly affects the recognition rate and speed of the whole system. In this paper, a region-based license plate detection method is proposed. In this method, firstly, mean shift is used to filter and segment a color vehicle image in order to get candidate regions. These candidate regions are then analyzed and classified in order to decide whether a candidate region contains a license plate. Unlike other existing license plate detection methods, the proposed method focuses on regions, which demonstrates to be more robust to interference characters and more accurate when compared with other methods.", "title": "" } ]
[ { "docid": "f5bc721d2b63912307c4ad04fb78dd2c", "text": "When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament stereotype threat and hypothesize that the apprehension it causes may disrupt women’s math performance. In Study 1 we demonstrated that the pattern observed in the literature that women underperform on difficult (but not easy) math tests was observed among a highly selected sample of men and women. In Study 2 we demonstrated that this difference in performance could be eliminated when we lowered stereotype threat by describing the test as not producing gender differences. However, when the test was described as producing gender differences and stereotype threat was high, women performed substantially worse than equally qualified men did. A third experiment replicated this finding with a less highly selected population and explored the mediation of the effect. The implication that stereotype threat may underlie gender differences in advanced math performance, even", "title": "" }, { "docid": "dc2770a8318dd4aa1142efebe5547039", "text": "The purpose of this study was to describe how reaching onset affects the way infants explore objects and their own bodies. We followed typically developing infants longitudinally from 2 through 5 months of age. At each visit we coded the behaviors infants performed with their hand when an object was attached to it versus when the hand was bare. We found increases in the performance of most exploratory behaviors after the emergence of reaching. These increases occurred both with objects and with bare hands. However, when interacting with objects, infants performed the same behaviors they performed on their bare hands but they performed them more often and in unique combinations. The results support the tenets that: (1) the development of object exploration begins in the first months of life as infants learn to selectively perform exploratory behaviors on their bodies and objects, (2) the onset of reaching is accompanied by significant increases in exploration of both objects and one's own body, (3) infants adapt their self-exploratory behaviors by amplifying their performance and combining them in unique ways to interact with objects.", "title": "" }, { "docid": "53b32fc7358444a965f1fb936c9050ed", "text": "PURPOSE\nTo determine whether improved functional balance through a Tai Chi intervention is related to subsequent reductions in falls among elderly persons.\n\n\nMETHODS\nTwo hundred fifty-six healthy, physically inactive older adults aged 70-92 (mean age +/- SD = 77.48 +/- 4.95), recruited from a local health system in Portland, OR, participated in a 6-month randomized controlled trial, with allocation to Tai Chi or exercise stretching control, followed by a 6-month postintervention follow-up. Functional balance measures included Berg balance scale, dynamic gait index, and functional reach, assessed during the 6-month intervention period (baseline, 3-month, and 6-month intervention endpoint) and again at the 6-month postintervention follow-up. Fall counts were recorded during the 6-month postintervention follow-up period. 
Data were analyzed through intention-to-treat analysis of variance and logistic regression procedures.\n\n\nRESULTS\nTai Chi participants who showed improvements in measures of functional balance at the intervention endpoint significantly reduced their risk of falls during the 6-month postintervention period, compared with those in the control condition (odds ratio (OR), 0.27, 95% confidence interval (CI), 0.07-0.96 for Berg balance scale; OR, 0.27, 95% CI, 0.09-0.87 for dynamic gait index; OR, 0.20, 95% CI, 0.05-0.82 for functional reach).\n\n\nCONCLUSIONS\nImproved functional balance through Tai Chi training is associated with subsequent reductions in fall frequency in older persons.", "title": "" }, { "docid": "8e0b61e82179cc39b4df3d06448a3d14", "text": "The antibacterial activity and antioxidant effect of the compounds α-terpineol, linalool, eucalyptol and α-pinene obtained from essential oils (EOs), against pathogenic and spoilage forming bacteria were determined. The antibacterial activities of these compounds were observed in vitro on four Gram-negative and three Gram-positive strains. S. putrefaciens was the most resistant bacteria to all tested components, with MIC values of 2% or higher, whereas E. coli O157:H7 was the most sensitive strain among the tested bacteria. Eucalyptol extended the lag phase of S. Typhimurium, E. coli O157:H7 and S. aureus at the concentrations of 0.7%, 0.6% and 1%, respectively. In vitro cell growth experiments showed the tested compounds had toxic effects on all bacterial species with different level of potency. Synergistic and additive effects were observed at least one dose pair of combination against S. Typhimurium, E. coli O157:H7 and S. aureus, however antagonistic effects were not found in these combinations. The results of this first study are encouraging for further investigations on mechanisms of antimicrobial activity of these EO components.", "title": "" }, { "docid": "cfc884f446a878df78b32203d7dfde18", "text": "We consider the problems of motion-compensated frame interpolation (MCFI) and bidirectional prediction in a video coding environment. These applications generally require good motion estimates at the decoder. In this paper, we use a multiscale optical-ow-based motion estimator that provides smooth, natural motion elds under bit-rate constraints. These motion estimates scale well with change in temporal resolution and provide considerable exibility in the design and operation of coders and decoders. In the MCFI application, this estimator provides excellent interpolated frames that are superior to conventional motion estimators, both visually and in terms of PSNR. We also consider the eeect of occlusions in the bidirectional prediction application, and introduce a dense label eld that complements our motion estimator. This label eld enables us to adaptively weight the forward and backward predictions, and gives us substantial visual and PSNR improvements in the covered/uncovered regions of the sequence.", "title": "" }, { "docid": "fa0c62b91643a45a5eff7c1b1fa918f1", "text": "This paper presents outdoor field experimental results to clarify the 4x4 MIMO throughput performance from applying multi-point transmission in the 15 GHz frequency band in the downlink of 5G cellular radio access system. 
The experimental results in large-cell scenario shows that up to 30 % throughput gain compared to non-multi-point transmission is achieved although the difference for the RSRP of two TPs is over 10 dB, so that the improvement for the antenna correlation is achievable and important aspect for the multi-point transmission in the 15 GHz frequency band as well as the improvement of the RSRP. Furthermore in small-cell scenario, the throughput gain of 70% and over 5 Gbps are achieved applying multi-point transmission in the condition of two different MIMO streams transmission from a single TP as distributed MIMO instead of four MIMO streams transmission from a single TP.", "title": "" }, { "docid": "4e9af88f6def28991568f91a03a65a50", "text": "Reinforcement learning (RL) is an area of research that has blossomed tremendously in recent years and has shown remarkable potential for artificial intelligence based opponents in computer games. This success is primarily due to the vast capabilities of convolutional neural networks, that can extract useful features from noisy and complex data. Games are excellent tools to test and push the boundaries of novel RL algorithms because they give valuable insight into how well an algorithm can perform in isolated environments without the real-life consequences. Real-time strategy games (RTS) is a genre that has tremendous complexity and challenges the player in short and long-term planning. There is much research that focuses on applied RL in RTS games, and novel advances are therefore anticipated in the not too distant future. However, there are to date few environments for testing RTS AIs. Environments in the literature are often either overly simplistic, such as microRTS, or complex and without the possibility for accelerated learning on consumer hardware like StarCraft II. This paper introduces the Deep RTS game environment for testing cutting-edge artificial intelligence algorithms for RTS games. Deep RTS is a high-performance RTS game made specifically for artificial intelligence research. It supports accelerated learning, meaning that it can learn at a magnitude of 50 000 times faster compared to existing RTS games. Deep RTS has a flexible configuration, enabling research in several different RTS scenarios, including partially observable state-spaces and map complexity. We show that Deep RTS lives up to our promises by comparing its performance with microRTS, ELF, and StarCraft II on high-end consumer hardware. Using Deep RTS, we show that a Deep Q-Network agent beats random-play agents over 70% of the time. Deep RTS is publicly available at https://github.com/cair/DeepRTS.", "title": "" }, { "docid": "166fb2f5f0667e6c72ee06c7b18b303b", "text": "The goal of metalearning is to generate useful shifts of inductive bias by adapting the current learning strategy in a \"useful\" way. Our learner leads a single life during which actions are continually executed according to the system's internal state and current policy (a modifiable, probabilistic algorithm mapping environmental inputs and internal states to outputs and new internal states). An action is considered a learning algorithm if it can modify the policy. Effects of learning processes on later learning processes are measured using reward/time ratios. Occasional backtracking enforces success histories of still valid policy modifications corresponding to histories of lifelong reward accelerations. The principle allows for plugging in a wide variety of learning algorithms. 
In particular, it allows for embedding the learner's policy modification strategy within the policy itself (self-reference). To demonstrate the principle's feasibility in cases where conventional reinforcement learning fails, we test it in complex, non-Markovian, changing environments (\"POMDPs\"). One of the tasks involves more than 10^13 states, two learners that both cooperate and compete, and strongly delayed reinforcement signals (initially separated by more than 300,000 time steps). The biggest difference between time and space is that you can't reuse time.", "title": "" }, { "docid": "bd4234dc626b4c56d0170948ac5d5de3", "text": "ISSN: 1049-4820 (Print) 1744-5191 (Online) Journal homepage: http://www.tandfonline.com/loi/nile20 Gamification and student motivation Patrick Buckley & Elaine Doyle To cite this article: Patrick Buckley & Elaine Doyle (2016) Gamification and student motivation, Interactive Learning Environments, 24:6, 1162-1175, DOI: 10.1080/10494820.2014.964263 To link to this article: https://doi.org/10.1080/10494820.2014.964263", "title": "" }, { "docid": "d3fd8c1ce41892f54aedff187f4872c2", "text": "In the first year of the TREC Micro Blog track, our participation has focused on building from scratch an IR system based on the Whoosh IR library. Though the design of our system (CipCipPy) is pretty standard it includes three ad-hoc solutions for the track: (i) a dedicated indexing function for hashtags that automatically recognizes the distinct words composing an hashtag, (ii) expansion of tweets based on the title of any referred Web page, and (iii) a tweet ranking function that ranks tweets in results by their content quality, which is compared against a reference corpus of Reuters news. In this preliminary paper we describe all the components of our system, and the efficacy scored by our runs. The CipCipPy system is available under a GPL license.", "title": "" }, { "docid": "3e850a45249f45e95d1a7413e7b142f1", "text": "In our increasingly “data-abundant” society, remote sensing big data perform massive, high dimension and heterogeneity features, which could result in “dimension disaster” to various extent. It is worth mentioning that the past two decades have witnessed a number of dimensional reductions to weak the spatiotemporal redundancy and simplify the calculation in remote sensing information extraction, such as the linear learning methods or the manifold learning methods. However, the “crowding” and mixing when reducing dimensions of remote sensing categories could degrade the performance of existing techniques. Then in this paper, by analyzing probability distribution of pairwise distances among remote sensing datapoints, we use the 2-mixed Gaussian model(GMM) to improve the effectiveness of the theory of t-Distributed Stochastic Neighbor Embedding (t-SNE). A basic reducing dimensional model is given to test our proposed methods. The experiments show that the new probability distribution capable retains the local structure and significantly reveals differences between categories in a global structure.", "title": "" }, { "docid": "814e593fac017e5605c4992ef7b25d6d", "text": "This paper discusses the design of high power density transformer and inductor for the high frequency dual active bridge (DAB) GaN charger. Because the charger operates at 500 kHz, the inductance needed to achieve ZVS for the DAB converter is reduced to as low as 3μH. As a result, it is possible to utilize the leakage inductor as the series inductor of DAB converter. 
To create such amount of leakage inductance, certain space between primary and secondary winding is allocated to store the leakage flux energy. The designed transformer is above 99.2% efficiency while delivering 3.3kW. The power density of the designed transformer is 6.3 times of the lumped transformer and inductor in 50 kHz Si Charger. The detailed design procedure and loss analysis are discussed.", "title": "" }, { "docid": "f058b13088ca0f38e350cb8c8ffb0c0f", "text": "In this paper, we propose a representation learning research framework for document-level sentiment analysis. Given a document as the input, document-level sentiment analysis aims to automatically classify its sentiment/opinion (such as thumbs up or thumbs down) based on the textural information. Despite the success of feature engineering in many previous studies, the hand-coded features do not well capture the semantics of texts. In this research, we argue that learning sentiment-specific semantic representations of documents is crucial for document-level sentiment analysis. We decompose the document semantics into four cascaded constitutes: (1) word representation, (2) sentence structure, (3) sentence composition and (4) document composition. Specifically, we learn sentiment-specific word representations, which simultaneously encode the contexts of words and the sentiment supervisions of texts into the continuous representation space. According to the principle of compositionality, we learn sentiment-specific sentence structures and sentence-level composition functions to produce the representation of each sentence based on the representations of the words it contains. The semantic representations of documents are obtained through document composition, which leverages the sentiment-sensitive discourse relations and sentence representations.", "title": "" }, { "docid": "30a8e04dce07c00499f90642e05d962e", "text": "The objective of this paper is to present the gravity compensation and compliance based force control for auxiliarily easiness in manipulating robot arm. Haptical application of the safety-priority robot arm technique which interacts with people must reduce the gear ratio and design necessary algorithm which can provide auxiliarily easiness in moving the robot arm especially during the teach and learning mode. In this study, we discuss the effects of two aspects and propose a control algorithm to improve efficiency of carrying heavy item. Firstly, the gear ratio of motor is bounded so that robot can be more flexibly compliant while user take grip on it. However, robot manipulator control algorithms will suffer greater gravity downward pulling issue due to low gear ratio. To solve this problem of gravity compensation, we propose a method that based on the concept of vector projection to calculate a general solution which can construct a gravity model of multi-DOF robot arm. Furthermore, we define a virtual mode that is proposed to compensate the deficiency of inertia's physical phenomenon. Secondly, we propose an approach which we call it force counterbalance control (FCC) that not only balances external load variation in addition to robot weight itself, but also keeps the property of dexterous easiness in manipulating the multi DOF robot arm. The FCC algorithm can be applied on several applications such as carrying heavy item or being auxiliarily easinese in manipulating robot arm. 
Our experimental result demonstrates the benefit of the proposed effect.", "title": "" }, { "docid": "2259232b86607e964393c884340efe79", "text": "Dynamic task allocation is an essential requirement for multi-robot systems functioning in unknown dynamic environments. It allows robots to change their behavior in response to environmental changes or actions of other robots in order to improve overall system performance. Emergent coordination algorithms for task allocation that use only local sensing and no direct communication between robots are attractive because they are robust and scalable. However, a lack of formal analysis tools makes emergent coordination algorithms difficult to design. In this paper we present a mathematical model of a general dynamic task allocation mechanism. Robots using this mechanism have to choose between two types of task, and the goal is to achieve a desired task division in the absence of explicit communication and global knowledge. Robots estimate the state of the environment from repeated local observations and decide which task to choose based on these observations. We model the robots and observations as stochastic processes and study the dynamics of individual robots and the collective behavior. We analyze the effect that the number of observations and the choice of decision functions have on the performance of the system. We validate the mathematical models on a multi-foraging scenario in a multi-robot system. We find that the model’s predictions agree very closely with experimental results from sensor-based simulations.", "title": "" }, { "docid": "f02224b34170dbb8482e84cd4eb2c31e", "text": "BACKGROUND\nMany countries in middle- and low-income countries today suffer from severe staff shortages and/or maldistribution of health personnel which has been aggravated more recently by the disintegration of health systems in low-income countries and by the global policy environment. One of the most damaging effects of severely weakened and under-resourced health systems is the difficulty they face in producing, recruiting, and retaining health professionals, particularly in remote areas. Low wages, poor working conditions, lack of supervision, lack of equipment and infrastructure as well as HIV and AIDS, all contribute to the flight of health care personnel from remote areas. In this global context of accelerating inequities health service policy makers and managers are searching for ways to improve the attraction and retention of staff in remote areas. But the development of appropriate strategies first requires an understanding of the factors which influence decisions to accept and/or stay in a remote post, particularly in the context of mid and low income countries (MLICS), and which strategies to improve attraction and retention are therefore likely to be successful. It is the aim of this review article to explore the links between attraction and retention factors and strategies, with a particular focus on the organisational diversity and location of decision-making.\n\n\nMETHODS\nThis is a narrative literature review which took an iterative approach to finding relevant literature. It focused on English-language material published between 1997 and 2007. The authors conducted Pubmed searches using a range of different search terms relating to attraction and retention of staff in remote areas. Furthermore, a number of relevant journals as well as unpublished literature were systematically searched. 
While the initial search included articles from high- middle- and low-income countries, the review focuses on middle- and low-income countries. About 600 papers were initially assessed and 55 eventually included in the review.\n\n\nRESULTS\nThe authors argue that, although factors are multi-facetted and complex, strategies are usually not comprehensive and often limited to addressing a single or limited number of factors. They suggest that because of the complex interaction of factors impacting on attraction and retention, there is a strong argument to be made for bundles of interventions which include attention to living environments, working conditions and environments and development opportunities. They further explore the organisational location of decision-making related to retention issues and suggest that because promising strategies often lie beyond the scope of human resource directorates or ministries of health, planning and decision-making to improve retention requires multi-sectoral collaboration within and beyond government. The paper provides a simple framework for bringing the key decision-makers together to identify factors and develop multi-facetted comprehensive strategies.\n\n\nCONCLUSION\nThere are no set answers to the problem of attraction and retention. It is only through learning about what works in terms of fit between problem analysis and strategy and effective navigation through the politics of implementation that any headway will be made against the almost universal challenge of staffing health service in remote rural areas.", "title": "" }, { "docid": "a7c0bdbf05ce5d8da20a80dcc3bfaec0", "text": "Neurosurgery is a medical specialty that relies heavily on imaging. The use of computed tomography and magnetic resonance images during preoperative planning and intraoperative surgical navigation is vital to the success of the surgery and positive patient outcome. Augmented reality application in neurosurgery has the potential to revolutionize and change the way neurosurgeons plan and perform surgical procedures in the future. Augmented reality technology is currently commercially available for neurosurgery for simulation and training. However, the use of augmented reality in the clinical setting is still in its infancy. Researchers are now testing augmented reality system prototypes to determine and address the barriers and limitations of the technology before it can be widely accepted and used in the clinical setting.", "title": "" }, { "docid": "5b5e69bd93f6b809c29596a54c1565fc", "text": "Variety and veracity are two distinct characteristics of large-scale and heterogeneous data. It has been a great challenge to efficiently represent and process big data with a unified scheme. In this paper, a unified tensor model is proposed to represent the unstructured, semistructured, and structured data. With tensor extension operator, various types of data are represented as subtensors and then are merged to a unified tensor. In order to extract the core tensor which is small but contains valuable information, an incremental high order singular value decomposition (IHOSVD) method is presented. By recursively applying the incremental matrix decomposition algorithm, IHOSVD is able to update the orthogonal bases and compute the new core tensor. Analyzes in terms of time complexity, memory usage, and approximation accuracy of the proposed method are provided in this paper. 
A case study illustrates that approximate data reconstructed from the core set containing 18% elements can guarantee 93% accuracy in general. Theoretical analyzes and experimental results demonstrate that the proposed unified tensor model and IHOSVD method are efficient for big data representation and dimensionality reduction.", "title": "" }, { "docid": "4057543c38716486defe51a12777c5c1", "text": "In recent years, recommender systems have become an important part of various applications, supporting both customers and providers in their decision-making processes. However, these systems still must overcome limitations that reduce their performance, like recommendations' overspecialization, cold start, and difficulties when items with unequal probability distribution appear or recommendations for sets of items are asked. A novel approach, addressing the above issues through a case-based recommendation methodology, is presented here. The scope of the presented approach is to generate meaningful recommendations based on items' co-occurring patterns and to provide more insight into customers' buying habits. In contrast to current recommendation techniques that recommend items based on users' ratings or history, and to most case-based item recommenders that evaluate items' similarities, the implemented recommender uses a hierarchical model for the items and searches for similar sets of items, in order to recommend those that are most likely to satisfy a user.", "title": "" }, { "docid": "6806ff9626d68336dce539a8f2c440af", "text": "Obesity and hypertension, major risk factors for the metabolic syndrome, render individuals susceptible to an increased risk of cardiovascular complications, such as adverse cardiac remodeling and heart failure. There has been much investigation into the role that an increase in the renin-angiotensin-aldosterone system (RAAS) plays in the pathogenesis of metabolic syndrome and in particular, how aldosterone mediates left ventricular hypertrophy and increased cardiac fibrosis via its interaction with the mineralocorticoid receptor (MR). Here, we review the pertinent findings that link obesity with elevated aldosterone and the development of cardiac hypertrophy and fibrosis associated with the metabolic syndrome. These studies illustrate a complex cross-talk between adipose tissue, the heart, and the adrenal cortex. Furthermore, we discuss findings from our laboratory that suggest that cardiac hypertrophy and fibrosis in the metabolic syndrome may involve cross-talk between aldosterone and adipokines (such as adiponectin).", "title": "" } ]
scidocsrr
8d857d96d99809b32dfa150fe3aa902f
Representation and analysis of enterprise models with semantic techniques: an application to ArchiMate, e3value and business model canvas
[ { "docid": "ab9416aaed78f3b1d6706ecd59c83db8", "text": "The ArchiMate modelling language provides a coherent and a holistic view of an enterprise in terms of its products, services, business processes, actors, business units, software applications and more. Yet, ArchiMate currently lacks (1) expressivity in modelling an enterprise from a value exchange perspective, and (2) rigour and guidelines in modelling business processes that realize the transactions relevant from a value perspective. To address these issues, we show how to connect e $$^{3}$$ value, a technique for value modelling, to ArchiMate via transaction patterns from the DEMO methodology. Using ontology alignment techniques, we show a transformation between the meta models underlying e $$^{3}$$ value, DEMO and ArchiMate. Furthermore, we present a step-wise approach that shows how this model transformation is achieved and, in doing so, we also show the of such a transformation. We exemplify the transformation of DEMO and e $$^{3}$$ value into ArchiMate by means of a case study in the insurance industry. As a proof of concept, we present a software tool supporting our transformation approach. Finally, we discuss the functionalities and limitations of our approach; thereby, we analyze its and practical applicability.", "title": "" } ]
[ { "docid": "53fca78f9ecbfe0a88eb1df8596976e1", "text": "As there has been an explosive increase in wireless data traffic, mmw communication has become one of the most attractive techniques in the 5G mobile communications systems. Although mmw communication systems have been successfully applied to indoor scenarios, various external factors in an outdoor environment limit the applications of mobile communication systems working at the mmw bands. In this article, we discuss the issues involved in the design of antenna array architecture for future 5G mmw systems, in which the antenna elements can be deployed in the shapes of a cross, circle, or hexagon, in addition to the conventional rectangle. The simulation results indicate that while there always exists a non-trivial gain fluctuation in other regular antenna arrays, the circular antenna array has a flat gain in the main lobe of the radiation pattern with varying angles. This makes the circular antenna array more robust to angle variations that frequently occur due to antenna vibration in an outdoor environment. In addition, in order to guarantee effective coverage of mmw communication systems, possible solutions such as distributed antenna systems and cooperative multi-hop relaying are discussed, together with the design of mmw antenna arrays. Furthermore, other challenges for the implementation of mmw cellular networks, for example, blockage, communication security, hardware development, and so on, are discussed, as are potential solutions.", "title": "" }, { "docid": "b5fea029d64084089de8e17ae9debffc", "text": "While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for \"MSRVideo to Text\") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Networkbased approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.", "title": "" }, { "docid": "3028de6940fb7a5af5320c506946edfc", "text": "Metaphor is ubiquitous in text, even in highly technical text. 
Correct inference about textual entailment requires computers to distinguish the literal and metaphorical senses of a word. Past work has treated this problem as a classical word sense disambiguation task. In this paper, we take a new approach, based on research in cognitive linguistics that views metaphor as a method for transferring knowledge from a familiar, well-understood, or concrete domain to an unfamiliar, less understood, or more abstract domain. This view leads to the hypothesis that metaphorical word usage is correlated with the degree of abstractness of the word’s context. We introduce an algorithm that uses this hypothesis to classify a word sense in a given context as either literal (denotative) or metaphorical (connotative). We evaluate this algorithm with a set of adjectivenoun phrases (e.g., in dark comedy , the adjective dark is used metaphorically; in dark hair, it is used literally) and with the TroFi (Trope Finder) Example Base of literal and nonliteral usage for fifty verbs. We achieve state-of-theart performance on both datasets.", "title": "" }, { "docid": "1c01d2d8d9a11fa71b811a5afbfc0250", "text": "This paper describes an interactive tour-guide robot, whic h was successfully exhibited in a Smithsonian museum. During its two weeks of operation, the robot interacted with more than 50,000 people, traversing more than 44km. Our approach specifically addresses issues such as safe navigation in unmodified and dynamic environments, and shortterm human-robot interaction.", "title": "" }, { "docid": "79c5085cb9f85dbcd52637a71234c199", "text": "Abstract: In this paper, a three-phase six-switch standard boost rectifier with unity-power-factor-correction is investigated. A general equation is derived that relates input phase voltage and duty ratios of switches in continuous conduction mode. Based on one of solutions and using One-Cycle Control, a Unified Constant-frequency Integration (UCI) controller for powerfactor-correction (PFC) is proposed. For the standard bridge boost rectifier, unity-power-factor and low total-harmonicdistortion (THD) can be realized in all three phases with a simple circuit that is composed of one integrator with reset along with several flips-flops, comparators, and some logic and linear components. It does not require multipliers and threephase voltage sensors, which are used in many other control approaches. In addition, it employs constant switching frequency modulation that is desirable for industrial applications. The proposed control approach is simple and reliable. Theoretical analysis is verified by simulation and experimental results.", "title": "" }, { "docid": "e022bcb002e2c851e697972a49c3e417", "text": "A polymer membrane-coated palladium (Pd) nanoparticle (NP)/single-layer graphene (SLG) hybrid sensor was fabricated for highly sensitive hydrogen gas (H2) sensing with gas selectivity. Pd NPs were deposited on SLG via the galvanic displacement reaction between graphene-buffered copper (Cu) and Pd ion. During the galvanic displacement reaction, graphene was used as a buffer layer, which transports electrons from Cu for Pd to nucleate on the SLG surface. The deposited Pd NPs on the SLG surface were well-distributed with high uniformity and low defects. The Pd NP/SLG hybrid was then coated with polymer membrane layer for the selective filtration of H2. Because of the selective H2 filtration effect of the polymer membrane layer, the sensor had no responses to methane, carbon monoxide, or nitrogen dioxide gas. 
On the contrary, the PMMA/Pd NP/SLG hybrid sensor exhibited a good response to exposure to 2% H2: on average, 66.37% response within 1.81 min and recovery within 5.52 min. In addition, reliable and repeatable sensing behaviors were obtained when the sensor was exposed to different H2 concentrations ranging from 0.025 to 2%.", "title": "" }, { "docid": "d355014cd6d5979307b6cdb49734db3e", "text": "It is of great interest in exploiting texture information for classification of hyperspectral imagery (HSI) at high spatial resolution. In this paper, a classification paradigm to exploit rich texture information of HSI is proposed. The proposed framework employs local binary patterns (LBPs) to extract local image features, such as edges, corners, and spots. Two levels of fusion (i.e., feature-level fusion and decision-level fusion) are applied to the extracted LBP features along with global Gabor features and original spectral features, where feature-level fusion involves concatenation of multiple features before the pattern classification process while decision-level fusion performs on probability outputs of each individual classification pipeline and soft-decision fusion rule is adopted to merge results from the classifier ensemble. Moreover, the efficient extreme learning machine with a very simple structure is employed as the classifier. Experimental results on several HSI data sets demonstrate that the proposed framework is superior to some traditional alternatives.", "title": "" }, { "docid": "ab8599cbe4b906cea6afab663cbe2caf", "text": "Real-time ETL and data warehouse multidimensional modeling (DMM) of business operational data has become an important research issue in the area of real-time data warehousing (RTDW). In this study, some of the recently proposed real-time ETL technologies from the perspectives of data volumes, frequency, latency, and mode have been discussed. In addition, we highlight several advantages of using semi-structured DMM (i.e. XML) in RTDW instead of traditional structured DMM (i.e., relational). We compare the two DMMs on the basis of four characteristics: heterogeneous data integration, types of measures supported, aggregate query processing, and incremental maintenance. We implemented the RTDW framework for an example telecommunication organization. Our experimental analysis shows that if the delay comes from the incremental maintenance of DMM, no ETL technology (full-reloading or incremental-loading) can help in real-time business intelligence.", "title": "" }, { "docid": "d755340bc483a392b48f8e714354291a", "text": "Kineograph is a distributed system that takes a stream of incoming data to construct a continuously changing graph, which captures the relationships that exist in the data feed. As a computing platform, Kineograph further supports graph-mining algorithms to extract timely insights from the fast-changing graph structure. To accommodate graph-mining algorithms that assume a static underlying graph, Kineograph creates a series of consistent snapshots, using a novel and efficient epoch commit protocol. To keep up with continuous updates on the graph, Kineograph includes an incremental graph-computation engine. We have developed three applications on top of Kineograph to analyze Twitter data: user ranking, approximate shortest paths, and controversial topic detection. For these applications, Kineograph takes a live Twitter data feed and maintains a graph of edges between all users and hashtags. 
Our evaluation shows that with 40 machines processing 100K tweets per second, Kineograph is able to continuously compute global properties, such as user ranks, with less than 2.5-minute timeliness guarantees. This rate of traffic is more than 10 times the reported peak rate of Twitter as of October 2011.", "title": "" }, { "docid": "9f81e82aa60f06f3eac37d9bce3c9707", "text": "Active contours are image segmentation methods that minimize the total energy of the contour to be segmented. Among the active contour methods, the radial methods have lower computational complexity and can be applied in real time. This work aims to present a new radial active contour technique, called pSnakes, using the 1D Hilbert transform as external energy. The pSnakes method is based on the fact that the beams in ultrasound equipment diverge from a single point of the probe, thus enabling the use of polar coordinates in the segmentation. The control points or nodes of the active contour are obtained in pairs and are called twin nodes. The internal energies as well as the external one, Hilbertian energy, are redefined. The results showed that pSnakes can be used in image segmentation of short-axis echocardiogram images and that they were effective in image segmentation of the left ventricle. The echo-cardiologist's golden standard showed that the pSnakes was the best method when compared with other methods. The main contributions of this work are the use of pSnakes and Hilbertian energy, as the external energy, in image segmentation. The Hilbertian energy is calculated by the 1D Hilbert transform. Compared with traditional methods, the pSnakes method is more suitable for ultrasound images because it is not affected by variations in image contrast, such as noise. The experimental results obtained by the left ventricle segmentation of echocardiographic images demonstrated the advantages of the proposed model. The results presented in this paper are justified due to an improved performance of the Hilbert energy in the presence of speckle noise.", "title": "" }, { "docid": "92c91a8e9e5eec86f36d790dec8020e7", "text": "Aspect-based opinion mining, which aims to extract aspects and their corresponding ratings from customers reviews, provides very useful information for customers to make purchase decisions. In the past few years several probabilistic graphical models have been proposed to address this problem, most of them based on Latent Dirichlet Allocation (LDA). While these models have a lot in common, there are some characteristics that distinguish them from each other. These fundamental differences correspond to major decisions that have been made in the design of the LDA models. While research papers typically claim that a new model outperforms the existing ones, there is normally no \"one-size-fits-all\" model. In this paper, we present a set of design guidelines for aspect-based opinion mining by discussing a series of increasingly sophisticated LDA models. We argue that these models represent the essence of the major published methods and allow us to distinguish the impact of various design decisions. 
We conduct extensive experiments on a very large real life dataset from Epinions.com (500K reviews) and compare the performance of different models in terms of the likelihood of the held-out test set and in terms of the accuracy of aspect identification and rating prediction.", "title": "" }, { "docid": "97f2f0dd427c5f18dae178bc2fd620ba", "text": "NOTICE The contents of this report reflect the views of the author, who is responsible for the facts and accuracy of the data presented herein. The contents do not necessarily reflect policy of the Department of Transportation. This report does not constitute a standard, specification, or regulation. Abstract This report summarizes the historical development of the resistance factors developed for the geotechnical foundation design sections of the AASHTO LRFD Bridge Design Specifications, and recommends how to specifically implement recent developments in resistance factors for geotechnical foundation design. In addition, recommendations regarding the load factor for downdrag loads, based on statistical analysis of available load test data and reliability theory, are provided. The scope of this report is limited to shallow and deep foundation geotechnical design at the strength limit state. 17. Forward With the advent of the AASHTO Load and Resistance Factor (LRFD) Bridge Design Specifications in 1992, there has been considerable focus on the geotechnical aspects of those specifications, since most geotechnical engineers are unfamiliar with LRFD concepts. This is especially true regarding the quantification of the level of safety needed for design. Up to the time of writing of this report, the geotechnical profession has typically used safety factors within an allowable stress design (ASD) framework (also termed working stress design, or WSD). For those agencies that use Load Factor Design (LFD), the safety factors for the foundation design are used in combination with factored loads in accordance with the AASHTO Standard Specifications for Highway Bridges (2002). The adaptation of geotechnical design and the associated safety factors to what would become the first edition of the AASHTO LRFD Bridge Design Specifications began in earnest with the publication of the results of NCHRP Project 24-4 as NCHRP Report 343 (Barker, et al., 1991). The details of the calibrations they conducted are provided in an unpublished Appendix to that report (Appendix A). This is the primary source of resistance factors for foundation design as currently published in AASHTO (2004). Since that report was published, changes have occurred in the specifications regarding load factors and design methodology that have required re-evaluation of the resistance factors. Furthermore, new studies have been or are being conducted that are yet to be implemented in the LRFD specifications. In 2002, the AASHTO Bridge Subcommittee initiated an effort, with the help of the Federal Highway Administration (FHWA), to rewrite the foundation design sections of the AASHTO …", "title": "" }, { "docid": "9006f257d25a9ba4dd2ae07eccccb0c2", "text": "Using memoization and various other optimization techniques, the number of dissections of the n × n square into n polyominoes of size n is computed for n ≤ 8. On this task our method outperforms Donald Knuth’s Algorithm X with Dancing Links. The number of jigsaw sudoku puzzle solutions is computed for n ≤ 7. For every jigsaw sudoku puzzle polyomino cover with n ≤ 6 the size of its smallest critical sets is determined. 
Furthermore it is shown that for every n ≥ 4 there exists a polyomino cover that does not allow for any sudoku puzzle solution. We give a closed formula for the number of possible ways to fill the border of an n × n square with numbers while obeying Latin square constraints. We define a cannibal as a nonempty hyperpolyomino that disconnects its exterior from its interior, where the interior is exactly the size of the hyperpolyomino itself, and we present the smallest found cannibals in two and three dimensions.", "title": "" }, { "docid": "a44e95fe672a4468b42fe881cd1697fd", "text": "In this paper, we present a maximum power point tracker and estimator for a PV system to estimate the point of maximum power, to track this point and force it to reach this point in finite time and to stay there for all future time in order to provide the maximum power available to the load. The load will be composed of a battery bank. This is obtained by controlling the duty cycle of a DC-DC converter using sliding mode control. The sliding mode controller is given the estimated maximum power point as a reference for it to track that point and force the PV system to operate in this point. This method has the advantage that it will guarantee the maximum output power possible by the array configuration while considering the dynamic parameters temperature and solar irradiance and delivering more power to charge the battery. The procedure of designing, simulating and results are presented in this paper.", "title": "" }, { "docid": "b3b050c35a1517dc52351cd917d0665a", "text": "The amount of information shared via social media is rapidly increasing amid growing concerns over online privacy. This study investigates the effect of controversiality and social endorsement of media content on sharing behavior when choosing between sharing publicly or anonymously. Anonymous sharing is found to be a popular choice (59% of shares), especially for controversial content which is 3.2x more likely to be shard anonymously. Social endorsement was not found to affect sharing behavior, except for sports-related content. Implications for social media interface design are dis-", "title": "" }, { "docid": "7442f94af36f6d317291da814e7f3676", "text": "Muscles are required to perform or absorb mechanical work under different conditions. However the ability of a muscle to do this depends on the interaction between its contractile components and its elastic components. In the present study we have used ultrasound to examine the length changes of the gastrocnemius medialis muscle fascicle along with those of the elastic Achilles tendon during locomotion under different incline conditions. Six male participants walked (at 5 km h(-1)) on a treadmill at grades of -10%, 0% and 10% and ran (at 10 km h(-1)) at grades of 0% and 10%, whilst simultaneous ultrasound, electromyography and kinematics were recorded. In both walking and running, force was developed isometrically; however, increases in incline increased the muscle fascicle length at which force was developed. Force was developed at shorter muscle lengths for running when compared to walking. Substantial levels of Achilles tendon strain were recorded in both walking and running conditions, which allowed the muscle fascicles to act at speeds more favourable for power production. In all conditions, positive work was performed by the muscle. 
The measurements suggest that there is very little change in the function of the muscle fascicles at different slopes or speeds, despite changes in the required external work. This may be a consequence of the role of this biarticular muscle or of the load sharing between the other muscles of the triceps surae.", "title": "" }, { "docid": "67bfb3126bca928568fdf1eb264b4722", "text": "The increment of computer technology use and the continued growth of companies have enabled most financial transactions to be performed through the electronic commerce systems, such as using the Credit card system, Telecommunication system, Healthcare Insurance system, etc. Unfortunately, these systems are used by both legitimate users and fraudsters. In addition, fraudsters utilized different approaches to breach the electronic commerce systems. Fraud prevention systems (FPSs) are insufficient to provide adequate security to the electronic commerce systems. However, the collaboration of FDSs with FPSs might be effective to secure electronic commerce systems. Nevertheless, there are issues and challenges that hinder the performance of FDSs, such as Concept Drift, Supports Real Time Detection, Skewed Distribution, Large Amount of Data etc. This survey paper aims to provide a systematic and comprehensive overview of these issues and challenges that obstruct the performance of FDSs. We have selected five electronic commerce systems; which are Credit card, Telecommunication, Healthcare Insurance, Automobile Insurance and Online auction. The prevalent fraud types in those E-commerce systems are introduced closely. Further, state-of-the-art FDSs approaches in selected E-commerce systems are systematically introduced. Then a brief discussion on potential research trends in the near future and conclusion are presented.", "title": "" }, { "docid": "c6f173f75917ee0632a934103ca7566c", "text": "Mersenne Twister (MT) is a widely-used fast pseudorandom number generator (PRNG) with a long period of 2 − 1, designed 10 years ago based on 32-bit operations. In this decade, CPUs for personal computers have acquired new features, such as Single Instruction Multiple Data (SIMD) operations (i.e., 128bit operations) and multi-stage pipelines. Here we propose a 128-bit based PRNG, named SIMD-oriented Fast Mersenne Twister (SFMT), which is analogous to MT but making full use of these features. Its recursion fits pipeline processing better than MT, and it is roughly twice as fast as optimised MT using SIMD operations. Moreover, the dimension of equidistribution of SFMT is better than MT. We also introduce a block-generation function, which fills an array of 32-bit integers in one call. It speeds up the generation by a factor of two. A speed comparison with other modern generators, such as multiplicative recursive generators, shows an advantage of SFMT. The implemented C-codes are downloadable from http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html.", "title": "" }, { "docid": "35712c761dfabeb20904976c8b1a917c", "text": "Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. 
We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.", "title": "" } ]
scidocsrr
4a8d24f409dd45bf26892ddaaace6818
Using imagination to understand the neural basis of episodic memory.
[ { "docid": "562df031fad2ed1583c1def457d74392", "text": "Social interaction is a cornerstone of human life, yet the neural mechanisms underlying social cognition are poorly understood. Recently, research that integrates approaches from neuroscience and social psychology has begun to shed light on these processes, and converging evidence from neuroimaging studies suggests a unique role for the medial frontal cortex. We review the emerging literature that relates social cognition to the medial frontal cortex and, on the basis of anatomical and functional characteristics of this brain region, propose a theoretical model of medial frontal cortical function relevant to different aspects of social cognitive processing.", "title": "" } ]
[ { "docid": "d8480f49edcc9034511698d5810ad839", "text": "Defect prediction on new projects or projects with limited historical data is an interesting problem in defect prediction studies. This is largely because it is difficult to collect defect information to label a dataset for training a prediction model. Cross-project defect prediction (CPDP) has tried to solve this problem by reusing prediction models built by other projects that have enough historical data. However, CPDP does not always build a strong prediction model because of the different distributions among datasets. Approaches for defect prediction on unlabeled datasets have also tried to address the problem by adopting unsupervised learning but it has one major limitation, the necessity for manual effort. In this study, we propose novel approaches, CLA and CLAMI, that show the potential for defect prediction on unlabeled datasets in an automated manner without need for manual effort. The key idea of the CLA and CLAMI approaches is to label an unlabeled dataset by using the magnitude of metric values. In our empirical study on seven open-source projects, the CLAMI approach led to the promising prediction performances, 0.636 and 0.723 in average f-measure and AUC, that are comparable to those of defect prediction based on supervised learning.", "title": "" }, { "docid": "79a2cc561cd449d8abb51c162eb8933d", "text": "We introduce a new test of how well language models capture meaning in children’s books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lowerfrequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation is highly influential to the performance: there is a sweet-spot, not too big and not too small, between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.", "title": "" }, { "docid": "b59a2c49364f3e95a2c030d800d5f9ce", "text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. 
The defective images are classified with an average accuracy rate of 96.3%.", "title": "" }, { "docid": "8d0b7e0315d0e8a7eba9876d7c08be69", "text": "We report on a case of conjoined twinning (CT) consistent with fusion of two embryos followed by resorption of the cranial half of one of them, resulting in a normal male baby with the lower half of a male parasitic twin fused to his chest. Fluorescent in situ hybridization (FISH) studies suggested that the parasitic twin was male, and DNA typing studies demonstrated dizygosity. Although incomplete fission is the usual explanation for conjoined twins, the unusual perpendicular orientation of the parasite to the autosite supports a mechanism observed in mares in which early fusion of two embryos is followed by resorption due to compromised embryonic polarity.", "title": "" }, { "docid": "94a6106cac2ecd3362c81fc6fd93df28", "text": "We present a simple encoding for unlabeled noncrossing graphs and show how its latent counterpart helps us to represent several families of directed and undirected graphs used in syntactic and semantic parsing of natural language as contextfree languages. The families are separated purely on the basis of forbidden patterns in latent encoding, eliminating the need to differentiate the families of non-crossing graphs in inference algorithms: one algorithm works for all when the search space can be controlled in parser input.", "title": "" }, { "docid": "d0372369256f0661eadddfcc27c992d6", "text": "Massive Open Online Courses (MOOCs) are a disruptive trend in education. Several initiatives have emerged during the last months to give support to MOOCs, and many educators have started offering courses as MOOCs in different areas and disciplines. However, designing a MOOC is not an easy task. Educators need to face not only pedagogical issues, but also other issues of logistical, technological and financial nature, as well as how these issues relate and constrain each other. Currently, little guidance is available for educators to address the design of MOOCs from scratch keeping a balance between all these issues. This paper proposes a conceptual framework for supporting educators in the description and design of MOOCs called the MOOC Canvas. The MOOC Canvas defines eleven interrelated issues that are addressed through a set of questions, offering a visual and understandable guidance for educators during the MOOC design process. As a practical usage example, this paper shows how the MOOC Canvas captures the description and design of a real 9-week MOOC. An analysis of the different elements of the course shed some light on the usage of the MOOC Canvas as a mechanism to address the description and design of MOOCs.", "title": "" }, { "docid": "a1c917d7a685154060ddd67d631ea061", "text": "In this paper, for finding the place of plate, a real time and fast method is expressed. In our suggested method, the image is taken to HSV color space; then, it is broken into blocks in a stable size. In frequent process, each block, in special pattern is probed. With the appearance of pattern, its neighboring blocks according to geometry of plate as a candidate are considered and increase blocks, are omitted. This operation is done for all of the uncontrolled blocks of images. First, all of the probable candidates are exploited; then, the place of plate is obtained among exploited candidates as density and geometry rate. In probing every block, only its lip pixel is studied which consists 23.44% of block area. 
From the features of suggestive method, we can mention the lack of use of expensive operation in image process and its low dynamic that it increases image process speed. This method is examined on the group of picture in background, distance and point of view. The rate of exploited plate reached at 99.33% and character recognition rate achieved 97%.", "title": "" }, { "docid": "1b8550cdbe9a01742fdb34b7516cfb83", "text": "Blood pressure (BP) is one of the important vital signs that need to be monitored for personal healthcare. Arterial blood pressure (BP) was estimated from pulse transit time (PTT) and PPG waveform. PTT is a time interval between an R-wave of electrocardiography (ECG) and a photoplethysmography (PPG) signal. This method does not require an aircuff and only a minimal inconvenience of attaching electrodes and LED/photo detector sensors on a subject. PTT computed between the ECG R-wave and the maximum first derivative PPG was strongly correlated with systolic blood pressure (SBP) (R = −0.712) compared with other PTT values, and the diastolic time proved to be appropriate for estimation diastolic blood pressure (DBP) (R = −0.764). The percent errors of SBP using the individual regression line (4–11%) were lower than those using the regression line obtained from all five subjects (9–14%). On the other hand, the DBP estimation did not show much difference between the individual regression (4–10%) and total regression line (6–10%). Our developed device had a total size of 7 × 13.5 cm and was operated by single 3-V battery. Biosignals can be measured for 72 h continuously without external interruptions. Through a serial network communication, an external personal computer can monitor measured waveforms in real time. Our proposed method can be used for non-constrained, thus continuous BP monitoring for the purpose of personal healthcare.", "title": "" }, { "docid": "8d3f65dbeba6c158126ae9d82c886687", "text": "Using dealer’s quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes have rather limited explanatory power. Further, the residuals from this regression are highly cross-correlated, and principal components analysis implies they are mostly driven by a single common factor. Although we consider several macroeconomic and financial variables as candidate proxies, we cannot explain this common systematic component. Our results suggest that monthly credit spread changes are principally driven by local supply0 demand shocks that are independent of both credit-risk factors and standard proxies for liquidity. THE RELATION BETWEEN STOCK AND BOND RETURNS has been widely studied at the aggregate level ~see, e.g., Keim and Stambaugh ~1986!, Fama and French ~1989, 1993!, Campbell and Ammer ~1993!!. Recently, a few studies have investigated that relation at both the individual firm level ~see, e.g., Kwan ~1996!! and portfolio level ~see, e.g., Blume, Keim, and Patel ~1991!, Cornell and Green ~1991!!. These studies focus on corporate bond returns, or yield changes. The main conclusions of these papers are: ~1! high-grade bonds behave like Treasury bonds, and ~2! low-grade bonds are more sensitive to stock returns. The implications of these studies may be limited in many situations of interest, however. For example, hedge funds often take highly levered positions in corporate bonds while hedging away interest rate risk by shorting treasuries. 
As a consequence, their portfolios become extremely sensitive to changes in credit spreads rather than changes in bond yields.", "title": "" }, { "docid": "07a048f6d960a3e11433bd10a4d40836", "text": "This paper presents a survey of topological spatial logics, taking as its point of departure the interpretation of the modal logic S4 due to McKinsey and Tarski. We consider the effect of extending this logic with the means to represent topological connectedness, focusing principally on the issue of computational complexity. In particular, we draw attention to the special problems which arise when the logics are interpreted not over arbitrary topological spaces, but over (low-dimensional) Euclidean spaces.", "title": "" }, { "docid": "d2ee6e2e3c7e851e75558ab69d159e08", "text": "the later stages of the development life cycle versus during production (Brooks 1995). Therefore, testing is one of the most critical and time-consuming phases of the software development life cycle, which accounts for 50 percent of the total cost of development (Brooks 1995). The testing phase should be planned carefully in order to save time and effort while detecting as many defects as possible. Different verification, validation, and testing strategies have been proposed so far to optimize the time and effort utilized during the testing phase: code reviews (Adrian, Branstad, and Cherniavsky 1982; Shull et al. 2002), inspections (Fagan 1976), and automated tools (Menzies, Greenwald, and Frank 2007; Nagappan, Ball, and Murphy 2006; Ostrand, Weyuker, and Bell 2005). Defect predictors improve the efficiency of the testing phase in addition to helping developers assess the quality and defectproneness of their software product (Fenton and Neil 1999). They also help managers in allocating resources. Most defect prediction models combine well-known methodologies and algorithms such as statistical techniques (Nagappan, Ball, and Murphy 2006; Ostrand, Weyuker, and Bell 2005; Zimmermann et al. 2004) and machine learning (Munson and Khoshgoftaar 1992; Fenton and Neil 1999; Lessmann et al. 2008; Moser, Pedrycz, and Succi 2008). They require historical data in terms of software metrics and actual defect rates, and combine these metrics and defect information as training data to learn which modules seem to be defect prone. 
Based on the knowledge from training data and software metrics acquired from a recently completed project, such tools can estimate defect-prone modules of that project. IAAI Articles", "title": "" }, { "docid": "30dfcf624badf766c3c7070548a47af4", "text": "The primary purpose of this paper is to stimulate discussion about a research agenda for a new interdisciplinary field. This field-the study of coordination-draws upon a variety of different disciplines including computer science, organization theory, management science, economics, and psychology. Work in this new area will include developing a body of scientific theory, which we will call \"coordination theory,\" about how the activities of separate actors can be coordinated. One important use for coordination theory will be in developing and using computer and communication systems to help people coordinate their activities in new ways. We will call these systems \"coordination technology.\" Rationale There are four reasons why work in this area is timely: (1) In recent years, large numbers of people have acquired direct access to computers. These computers are now beginning to be connected to each other. Therefore, we now have, for the first time, an opportunity for vastly larger numbers of people to use computing and communications capabilities to help coordinate their work. For example, specialized new software has been developed to (a) support multiple authors working together on the same document, (b) help people display and manipulate information more effectively in face-to-face meetings, and (c) help people intelligently route and process electronic messages. It already appears likely that there will be commercially successful products of this new type (often called \"computer supported cooperative work\" or \"groupware\"), and to some observers these applications herald a paradigm shift in computer usage as significant as the earlier shifts to time-sharing and personal computing. It is less clear whether the continuing development of new computer applications in this area will depend solely on the intuitions of successful designers or whether it will also be guided by a coherent underlying theory of how people coordinate their activities now and how they might do so differently with computer support. (2) In the long run, the dramatic improvements in the costs and capabilities of information technologies are changing-by orders of magnitude-the constraints on how certain kinds of communication and coordination can occur. At the same time, there is a pervasive feeling in American business that the pace of change is accelerating and that we need to create more flexible and adaptive organizations. Together, these changes may soon lead us across a threshhold where entirely new ways of organizing human activities become desirable. For 2 example, new capabilities for communicating information faster, less expensively, and …", "title": "" }, { "docid": "1b4d20f4f05afdb65133ff0940545ac7", "text": "Kovesdy CP, Quarles LD. FGF23 from bench to bedside. Am J Physiol Renal Physiol 310: F1168–F1174, 2016. First published February 10, 2016; doi:10.1152/ajprenal.00606.2015.—There is a strong association between elevated circulating fibroblast growth factor-23 (FGF23) levels and adverse outcomes in patients with chronic kidney disease (CKD) of all stages. 
Initially discovered as a regulator of phosphate and vitamin D homeostasis, FGF23 has now been implicated in several pathophysiological mechanisms that may negatively impact the cardiovascular and renal systems. FGF23 is purported to have direct (off-target) effects in the myocardium, as well as canonical effects on FGF receptor/ -klotho receptor complexes in the kidney to activate the renin-angiotensin-aldosterone system, modulate soluble -klotho levels, and increase sodium retention, to cause left ventricular hypertrophy (LVH). Conversely, FGF23 could be an innocent bystander produced in response to chronic inflammation or other processes associated with CKD that cause LVH and adverse cardiovascular outcomes. Further exploration of these complex mechanisms is needed before modulation of FGF23 can become a legitimate clinical target in CKD.", "title": "" }, { "docid": "065620d1b22634eebf94bb0b33bc8598", "text": "An increasing amount of information is being collected on the ecological and socio-economic value of goods and services provided by natural and semi-natural ecosystems. However, much of this information appears scattered throughout a disciplinary academic literature, unpublished government agency reports, and across the World Wide Web. In addition, data on ecosystem goods and services often appears at incompatible scales of analysis and is classified differently by different authors. In order to make comparative ecological economic analysis possible, a standardized framework for the comprehensive assessment of ecosystem functions, goods and services is needed. In response to this challenge, this paper presents a conceptual framework and typology for describing, classifying and valuing ecosystem functions, goods and services in a clear and consistent manner. In the following analysis, a classification is given for the fullest possible range of 23 ecosystem functions that provide a much larger number of goods and services. In the second part of the paper, a checklist and matrix is provided, linking these ecosystem functions to the main ecological, socio–cultural and economic valuation methods. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "28194c367e584aca063817affe24fb4b", "text": "The present study was carried out to characterize the bioactive constituents present in different leaf extracts of Vitex altissima L. using UV-VIS, FTIR and GC-MS. The crude extracts were scanned in the wavelength ranging from 200-1100 nm by using Perkin Elmer Spectrophotometer and the characteristic peaks were detected. For GC-MS analysis, 10 g sample is extracted with 30 ml ethanol, filtered in ash less filter paper with 2 g sodium sulphate and the extract is concentrated to 1 ml by bubbling nitrogen into the solution. The compound detection employed the NIST Ver. 2.0 Year 2005 library. The biological activities are based on Dr. Duke’s Phytochemical and Ethnobotanical Databases by Dr. Jim Duke of the Agricultural Research Service/USDA. The UV-VIS profile showed different peaks ranging from 400-700 nm with different absorption respectively. The FTIR spectrum confirmed the presence of alcohols, phenols, alkanes, alkynes, alkyl halides, aldehydes, carboxylic acids, aromatics, nitro compounds and amines in different extracts. The results of the GC-MS analysis provide different peaks determining the presence of 21 phytochemical compounds with different therapeutic activities. 
The major phytoconstituents were n-Hexadecanoic acid (23.74%), 9, 12-Octadecadienoic acid [Z, Z] (23.41%) and Squalene (14.74%). Hence, this study offers a base of using V. altissima as herbal alternative for the synthesis of antimicrobial agents.", "title": "" }, { "docid": "b25b7100c035ad2953fb43087ede1625", "text": "In this paper, a novel 10W substrate integrated waveguide (SIW) high power amplifier (HPA) designed with SIW matching network (MN) is presented. The SIW MN is connected with microstrip line using microstrip-to-SIW transition. An inductive metallized post in SIW is employed to realize impedance matching. At the fundamental frequency of 2.14 GHz, the impedance matching is realized by moving the position of the inductive metallized post in the SIW. Both the input and output MNs are designed with the proposed SIW-based MN concept. One SIW-based 10W HPA using GaN HEMT at 2.14 GHz is designed, fabricated, and measured. The proposed SIW-based HPA can be easily connected with any microstrip circuit with microstrip-to-SIW transition. Measured results show that the maximum power added efficiency (PAE) is 65.9 % with 39.8 dBm output power and the maximum gain is 20.1 dB with 30.9 dBm output power at 2.18 GHz. The size of the proposed SIW-based HPA is comparable with other microstrip-based PAs designed at the operating frequency.", "title": "" }, { "docid": "1b91c76a4ba6e5721c5c1d30209ae8bc", "text": "We study the problem of conditional generative modeling based on designated semantics or structures. Existing models that build conditional generators either require massive labeled instances as supervision or are unable to accurately control the semantics of generated samples. We propose structured generative adversarial networks (SGANs) for semi-supervised conditional generative modeling. SGAN assumes the data x is generated conditioned on two independent latent variables: y that encodes the designated semantics, and z that contains other factors of variation. To ensure disentangled semantics in y and z, SGAN builds two collaborative games in the hidden space to minimize the reconstruction error of y and z, respectively. Training SGAN also involves solving two adversarial games that have their equilibrium concentrating at the true joint data distributions p(x, z) and p(x,y), avoiding distributing the probability mass diffusely over data space that MLE-based methods may suffer. We assess SGAN by evaluating its trained networks, and its performance on downstream tasks. We show that SGAN delivers a highly controllable generator, and disentangled representations; it also establishes start-of-the-art results across multiple datasets when applied for semi-supervised image classification (1.27%, 5.73%, 17.26% error rates on MNIST, SVHN and CIFAR-10 using 50, 1000 and 4000 labels, respectively). Benefiting from the separate modeling of y and z, SGAN can generate images with high visual quality and strictly following the designated semantic, and can be extended to a wide spectrum of applications, such as style transfer.", "title": "" }, { "docid": "f78779d6c2937560c68b7a3513c4730f", "text": "We report on the methods used in our recent DeepEnsembleCoco submission to the PASCAL VOC 2012 challenge, which achieves state-of-theart performance on the object detection task. Our method is a variant of the R-CNN model proposed by Girshick et al. [4] with two key improvements to training and evaluation. 
First, our method constructs an ensemble of deep CNN models with different architectures that are complementary to each other. Second, we augment the PASCAL VOC training set with images from the Microsoft COCO dataset to significantly enlarge the amount training data. Importantly, we select a subset of the Microsoft COCO images to be consistent with the PASCAL VOC task. Results on the PASCAL VOC evaluation server show that our proposed method outperform all previous methods on the PASCAL VOC 2012 detection task at time of submission.", "title": "" }, { "docid": "985209b72349fea5ab5c989bb0cbf498", "text": "Soft Pneumatic Actuator skin (SPA-skin) is a novel concept of ultra-thin (<; 1 mm) sensor embedded actuators with distributed actuation points that could cover soft bodies. This highly customizable and flexible SPA-skin is ideal for providing proprioceptive sensing by covering pre-existing structures and robots bodies. Having few limitation of the surface quality, dynamics, or shape, these mechanical attributes allow potential applications in autonomous flexible braille, active surface pattern reconfiguration, distributed actuation and sensing for tactile interface improvements. In this paper, the authors present a proof-of-concept SPA-skin. The mechanical parameters, design criteria, sensor selection, and actuator construction process are illustrated. Two control schemes, actuation mode and force sensing mode, are also demonstrated with the latest prototype.", "title": "" }, { "docid": "0de84142c51e72dd907804ef518195d8", "text": "Markov chain Monte Carlo and sequential Monte Carlo methods have emerged as the two main tools to sample from high dimensional probability distributions.Although asymptotic convergence of Markov chain Monte Carlo algorithms is ensured under weak assumptions, the performance of these algorithms is unreliable when the proposal distributions that are used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. We show here how it is possible to build efficient high dimensional proposal distributions by using sequential Monte Carlo methods. This allows us not only to improve over standard Markov chain Monte Carlo schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously so. We demonstrate these algorithms on a non-linear state space model and a Lévy-driven stochastic volatility model.", "title": "" } ]
scidocsrr
c0b426b99bd7643159a64cfc33879418
What Makes Interruptions Disruptive?: A Process-Model Account of the Effects of the Problem State Bottleneck on Task Interruption and Resumption
[ { "docid": "6c8151eee3fcfaec7da724c2a6899e8f", "text": "Classic work on interruptions by Zeigarnik showed that tasks that were interrupted were more likely to be recalled after a delay than tasks that were not interrupted. Much of the literature on interruptions has been devoted to examining this effect, although more recently interruptions have been used to choose between competing designs for interfaces to complex devices. However, none of this work looks at what makes some interruptions disruptive and some not. This series of experiments uses a novel computer-based adventure-game methodology to investigate the effects of the length of the interruption, the similarity of the interruption to the main task, and the complexity of processing demanded by the interruption. It is concluded that subjects make use of some form of nonarticulatory memory which is not affected by the length of the interruption. It is affected by processing similar material however, and by a complex mentalarithmetic task which makes large demands on working memory.", "title": "" }, { "docid": "eaa175d9bb7c86c1750936389439e208", "text": "We present data from detailed observation of 24 information workers that shows that they experience work fragmentation as common practice. We consider that work fragmentation has two components: length of time spent in an activity, and frequency of interruptions. We examined work fragmentation along three dimensions: effect of collocation, type of interruption, and resumption of work. We found work to be highly fragmented: people average little time in working spheres before switching and 57% of their working spheres are interrupted. Collocated people work longer before switching but have more interruptions. Most internal interruptions are due to personal work whereas most external interruptions are due to central work. Though most interrupted work is resumed on the same day, more than two intervening activities occur before it is. We discuss implications for technology design: how our results can be used to support people to maintain continuity within a larger framework of their working spheres.", "title": "" } ]
[ { "docid": "d1e6378b7909a6200b35a7c7e21b2c60", "text": "This paper analyzes and simulates the Li-ion battery charging process for a solar powered battery management system. The battery is charged using a non-inverting synchronous buck-boost DC/DC power converter. The system operates in buck, buck-boost, or boost mode, according to the supply voltage conditions from the solar panels. Rapid changes in atmospheric conditions or sunlight incident angle cause supply voltage variations. This study develops an electrochemical-based equivalent circuit model for a Li-ion battery. A dynamic model for the battery charging process is then constructed based on the Li-ion battery electrochemical model and the buck-boost power converter dynamic model. The battery charging process forms a system with multiple interconnections. Characteristics, including battery charging system stability margins for each individual operating mode, are analyzed and discussed. Because of supply voltage variation, the system can switch between buck, buck-boost, and boost modes. The system is modeled as a Markov jump system to evaluate the mean square stability of the system. The MATLAB based Simulink piecewise linear electric circuit simulation tool is used to verify the battery charging model.", "title": "" }, { "docid": "6aa4b1064833af0c91d16af28136e7e4", "text": "Recently, supervised classification has been shown to work well for the task of speech separation. We perform an in-depth evaluation of such techniques as a front-end for noise-robust automatic speech recognition (ASR). The proposed separation front-end consists of two stages. The first stage removes additive noise via time-frequency masking. The second stage addresses channel mismatch and the distortions introduced by the first stage; a non-linear function is learned that maps the masked spectral features to their clean counterpart. Results show that the proposed front-end substantially improves ASR performance when the acoustic models are trained in clean conditions. We also propose a diagonal feature discriminant linear regression (dFDLR) adaptation that can be performed on a per-utterance basis for ASR systems employing deep neural networks and HMM. Results show that dFDLR consistently improves performance in all test conditions. Surprisingly, the best average results are obtained when dFDLR is applied to models trained using noisy log-Mel spectral features from the multi-condition training set. With no channel mismatch, the best results are obtained when the proposed speech separation front-end is used along with multi-condition training using log-Mel features followed by dFDLR adaptation. Both these results are among the best on the Aurora-4 dataset.", "title": "" }, { "docid": "f013f58d995693a79cd986a028faff38", "text": "We present the design and implementation of a system for axiomatic programming, and its application to mathematical software construction. Key novelties include a direct support for user-defined axioms establishing local equalities between types, and overload resolution based on equational theories and user-defined local axioms. We illustrate uses of axioms, and their organization into concepts, in structured generic programming as practiced in computational mathematical systems.", "title": "" }, { "docid": "17c49edf5842fb918a3bd4310d910988", "text": "In this paper, we present a real-time salient object detection system based on the minimum spanning tree. 
Due to the fact that background regions are typically connected to the image boundaries, salient objects can be extracted by computing the distances to the boundaries. However, measuring the image boundary connectivity efficiently is a challenging problem. Existing methods either rely on superpixel representation to reduce the processing units or approximate the distance transform. Instead, we propose an exact and iteration free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry information in a scene. Meanwhile, it largely reduces the search space of shortest paths, resulting an efficient and high quality distance transform algorithm. We further introduce a boundary dissimilarity measure to compliment the shortage of distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves the leading performance compared to the state-of-the-art methods in terms of efficiency and accuracy.", "title": "" }, { "docid": "49ff096deb6621438286942b792d6af3", "text": "Fast fashion is a business model that offers (the perception of) fashionable clothes at affordable prices. From an operations standpoint, fast fashion requires a highly responsive supply chain that can support a product assortment that is periodically changing. Though the underlying principles are simple, the successful execution of the fast-fashion business model poses numerous challenges. We present a careful examination of this business model and discuss its execution by analyzing the most prominent firms in the industry. We then survey the academic literature for research that is specifically relevant or directly related to fast fashion. Our goal is to expose the main components of fast fashion and to identify untapped research opportunities.", "title": "" }, { "docid": "9676c561df01b794aba095dc66b684f8", "text": "The differentiation of B lymphocytes in the bone marrow is guided by the surrounding microenvironment determined by cytokines, adhesion molecules, and the extracellular matrix. These microenvironmental factors are mainly provided by stromal cells. In this paper, we report the identification of a VCAM-1-positive stromal cell population by flow cytometry. This population showed the expression of cell surface markers known to be present on stromal cells (CD10, CD13, CD90, CD105) and had a fibroblastoid phenotype in vitro. Single cell RT-PCR analysis of its cytokine expression pattern revealed transcripts for haematopoietic cytokines important for either the early B lymphopoiesis like flt3L or the survival of long-lived plasma cells like BAFF or both processes like SDF-1. Whereas SDF-1 transcripts were detectable in all VCAM-1-positive cells, flt3L and BAFF were only expressed by some cells suggesting the putative existence of different subpopulations with distinct functional properties. In summary, the VCAM-1-positive cell population seems to be a candidate stromal cell population supporting either developing B cells and/or long-lived plasma cells in human bone marrow.", "title": "" }, { "docid": "c42f395adaee401acdf31a1211d225f3", "text": "In recent years, research efforts seeking to provide more natural, human-centered means of interacting with computers have gained growing interest. 
A particularly important direction is that of perceptive user interfaces, where the computer is endowed with perceptive capabilities that allow it to acquire both implicit and explicit information about the user and the environment. Vision has the potential of carrying a wealth of information in a non-intrusive manner and at a low cost, therefore it constitutes a very attractive sensing modality for developing perceptive user interfaces. Proposed approaches for vision-driven interactive user interfaces resort to technologies such as head tracking, face and facial expression recognition, eye tracking and gesture recognition. In this paper, we focus our attention to vision-based recognition of hand gestures. The first part of the paper provides an overview of the current state of the art regarding the recognition of hand gestures as these are observed and recorded by typical video cameras. In order to make the review of the related literature tractable, this paper does not discuss:", "title": "" }, { "docid": "d562cbba0256fc0066ad1fb22adb1342", "text": "Ship design is a complex endeavor requiring the successful coordination of many disciplines, of both technical and non-technical nature, and of individual experts to arrive at valuable design solutions. Inherently coupled with the design process is design optimization, namely the selection of the best solution out ofmany feasible ones on the basis of a criterion, or rather a set of criteria. A systemic approach to ship design may consider the ship as a complex system integrating a variety of subsystems and their components, for example, subsystems for cargo storage and handling, energy/power generation and ship propulsion, accommodation of crew/passengers and ship navigation. Independently, considering that ship design should actually address the whole ship’s life-cycle, it may be split into various stages that are traditionally composed of the concept/preliminary design, the contractual and detailed design, the ship construction/fabrication process, ship operation for an economic life and scrapping/recycling. It is evident that an optimal ship is the outcome of a holistic optimization of the entire, above-defined ship system over her whole life-cycle. But even the simplest component of the above-defined optimization problem, namely the first phase (conceptual/preliminary design), is complex enough to require to be simplified (reduced) in practice. Inherent to ship design optimization are also the conflicting requirements resulting from the design constraints and optimization criteria (merit or objective functions), reflecting the interests of the various ship design stake holders. The present paper provides a brief introduction to the holistic approach to ship design optimization, defines the generic ship design optimization problem and demonstrates its solution by use of advanced optimization techniques for the computer-aided generation, exploration and selection of optimal designs. It discusses proposed methods on the basis of some typical ship design optimization problems with multiple objectives, leading to improved and partly innovative designs with increased cargo carrying capacity, increased safety and survivability, reduced required powering and improved environmental protection. The application of the proposed methods to the integrated ship system for life-cycle optimization problem remains a challenging but straightforward task for the years to come. © 2009 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "88644bb236b0112bf4825a5020d67629", "text": "A Graphical User Interface (GUI) is the most widely used method whereby information systems interact with users. According to ACM Computing Surveys, on average, more than 45% of software code in a software application is dedicated to the GUI. However, GUI testing is extremely expensive. In unit testing, 10,000 cases can often be automatically tested within a minute whereas, in GUI testing, 10,000 simple GUI test cases need more than 10 hours to complete. To facilitate GUI testing automation, the knowledge model representing the interaction between a user and a computer system is the core. The most advanced GUI testing model to date is the Event Flow Graph (EFG) model proposed by the team of Professor Atif M. Memon at the University of Maryland. The EFG model successfully enabled GUI testing automation for a range of applications. However, it has a number of flaws which prevent it from providing effective GUI testing. Firstly, the EFG model can only model knowledge for basic GUI test automation. Secondly, EFGs are not able to model events with variable follow-up event sets. Thirdly, test cases generation still involves tremendous manual work. This thesis effectively addresses the challenges of existing GUI testing methods and provides a unified solution to GUI testing automation. The three main contributions of this thesis are the proposal of the Graphic User Interface Testing Automation Model", "title": "" }, { "docid": "d99d83f8fbd062ddae5a8ab2d5e19e6d", "text": "A low-distortion super-GOhm subthreshold MOS resistor is designed, fabricated and experimentally validated. The circuit is utilized as a feedback element in the body of a two-stage neural recording amplifier. Linearity is experimentally validated for 0.5 Hz to 5 kHz input frequency and over 0.3 to 0.9 V output voltage dynamic range. The implemented pseudo resistor is also tunable, making the high-pass filter pole adjustable. The circuit is fabricated in 0.13-μm CMOS process and consumes 96 nW from a 1.2 V supply to realize an over 500 GΩ resistance.", "title": "" }, { "docid": "ac4342a829154ebfa7cca35c36619b82", "text": "We present a new approach to robustly solve photometric stereo problems. We cast the problem of recovering surface normals from multiple lighting conditions as a problem of recovering a low-rank matrix with both missing entries and corrupted entries, which model all types of non-Lambertian effects such as shadows and specularities. Unlike previous approaches that use Least-Squares or heuristic robust techniques, our method uses advanced convex optimization techniques that are guaranteed to find the correct low-rank matrix by simultaneously fixing its missing and erroneous entries. Extensive experimental results demonstrate that our method achieves unprecedentedly accurate estimates of surface normals in the presence of significant amount of shadows and specularities. 
The new technique can be used to improve virtually any photometric stereo method including uncalibrated photometric stereo.", "title": "" }, { "docid": "8970ace14fef5499de4bf810ab66c7ce", "text": "Glioblastoma multiforme is the most common primary malignant brain tumour, with a median survival of about one year. This poor prognosis is due to therapeutic resistance and tumour recurrence after surgical removal. Precisely how recurrence occurs is unknown. Using a genetically engineered mouse model of glioma, here we identify a subset of endogenous tumour cells that are the source of new tumour cells after the drug temozolomide (TMZ) is administered to transiently arrest tumour growth. A nestin-ΔTK-IRES-GFP (Nes-ΔTK-GFP) transgene that labels quiescent subventricular zone adult neural stem cells also labels a subset of endogenous glioma tumour cells. On arrest of tumour cell proliferation with TMZ, pulse-chase experiments demonstrate a tumour re-growth cell hierarchy originating with the Nes-ΔTK-GFP transgene subpopulation. Ablation of the GFP+ cells with chronic ganciclovir administration significantly arrested tumour growth, and combined TMZ and ganciclovir treatment impeded tumour development. Thus, a relatively quiescent subset of endogenous glioma cells, with properties similar to those proposed for cancer stem cells, is responsible for sustaining long-term tumour growth through the production of transient populations of highly proliferative cells.", "title": "" }, { "docid": "fb7026be96349bd201951449498e5477", "text": "Graphs are among the most ubiquitous models of both natural and human-made structures. They can be used to model many types of relations and process dynamics in computer science, physical, biological and social systems. Many problems of practical interest can be represented by graphs. In general graphs theory has a wide range of applications in diverse fields. This paper explores different elements involved in graph theory including graph representations using computer systems and graph-theoretic data structures such as list structure and matrix structure. The emphasis of this paper is on graph applications in computer science. To demonstrate the importance of graph theory in computer science, this article addresses most common applications for graph theory in computer science. These applications are presented especially to project the idea of graph theory and to demonstrate its importance in computer science.", "title": "" }, { "docid": "74c7895313a2f98a5dd4e5c9d5c664bf", "text": "The research was conducted to identify the presence of protein by indicating amide groups and measuring its level in food through specific groups of protein using FTIR (Fourier Transformed Infrared) method. The scanning process was conducted on wavenumber 400—4000 cm -1 . The determination of functional group was being done by comparing wavenumber of amide functional groups of the protein samples to existing standard. Protein level was measured by comparing absorbance of protein specific functional groups to absorbance of fatty acid functional groups. Result showed the FTIR spectrums of all samples were on 557-3381 cm -1 wavenumber range. The amides detected were Amide III, IV, and VI with absorbance between trace until 0.032%. The presence of protein can be detected in samples animal and vegetable cheese, butter, and milk through functional groups of amide III, IV, and VI were on 1240-1265 cm -1 , 713-721 cm -1 , and 551-586 cm -1 wavenumber respectively . 
Urine was detected through functional groups of amide III and IV were on 1639 cm -1 and 719 cm -1 wavenumber. The protein level of animal cheese, vegetable cheese, butter, and milk were 1.01%, 1.0%, 0.86%, and 1.55% respectively.", "title": "" }, { "docid": "06e53c86f6517dcaa2538f9920b362a5", "text": "In a network topology for forwarding packets various routing protocols are being used. Routers maintain a routing table for successful delivery of the packets from the source node to the correct destined node. The extent of information stored by a router about the network depends on the algorithm it follows. Most of the popular routing algorithms used are RIP, OSPF, IGRP and EIGRP. Here in this paper we are analyzing the performance of these very algorithms on the basis of the cost of delivery, amount of overhead on each router, number of updates needed, failure recovery, delay encountered and resultant throughput of the system. We are trying to find out which protocol suits the best for the network and through a thorough analysis we have tried to find the pros and cons of each protocol.", "title": "" }, { "docid": "77222e2a34cba752b133502bd816f9ab", "text": "To describe the use of a local hemostatic agent (LHA) for the management of postpartum hemorrhage (PPH) due to bleeding of the placental bed in patients taken to caesarean section at Fundación Santa Fe de Bogotá University Hospital. A total of 41 pregnant women who had a caesarean section and developed PPH. A cross-sectional study. Analysis of all cases of PPH during caesarean section presented from 2006 up to and including 2012 at Fundación Santa Fe de Bogotá University Hospital. Emergency hysterectomy due to PPH. The proportion of hysterectomies was 5 vs. 66 % for the group that received and did not receive management with a LHA respectively (PR 0.07, CI 95 % 0.01–0.51 p < 0.01). For the group managed without a LHA, 80 % of patients needed hemoderivatives transfusion vs. 20 % of patients in the group managed with a LHA (PR 0.24, CI 95 % 0.1–0.6 p < 0.01). A reduction in the mean days of hospitalization in addition to a descent in the proportion of patients admitted to the intensive care unit (ICU) was noticed when comparing the group that received a LHA versus the one that did not. An inverse association between the use of a LHA in patients with PPH due to bleeding of the placental bed and the need to perform an emergency obstetric hysterectomy was observed. Additionally there was a significant reduction in the mean duration of hospital stay, use of hemoderivatives and admission to the ICU.", "title": "" }, { "docid": "62ea6783f6a3e6429621286b4a1f068d", "text": "Aviation delays inconvenience travelers and result in financial losses for stakeholders. Without complex data pre-processing, delay data collected by the existing IATA delay coding system are inadequate to support advanced delay analytics, e.g. large-scale delay propagation tracing in an airline network. Consequently, we developed three new coding schemes aiming at improving the current IATA system. These schemes were tested with specific analysis tasks using simulated delay data and were benchmarked against the IATA system. It was found that a coding scheme with a well-designed reporting style can facilitate automated data analytics and data mining, and an improved grouping of delay codes can minimise potential confusion at the data entry and recording stages. © 2014 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "b7f7e80b40f9b8b533811a565270824a", "text": "Many studies over the past two decades have shown that people and animals can use brain signals to convey their intent to a computer using brain-computer interfaces (BCIs). BCI systems measure specific features of brain activity and translate them into control signals that drive an output. The sensor modalities that have most commonly been used in BCI studies have been electroencephalographic (EEG) recordings from the scalp and single- neuron recordings from within the cortex. Over the past decade, an increasing number of studies has explored the use of electro-corticographic (ECoG) activity recorded directly from the surface of the brain. ECoG has attracted substantial and increasing interest, because it has been shown to reflect specific details of actual and imagined actions, and because its technical characteristics should readily support robust and chronic implementations of BCI systems in humans. This review provides general perspectives on the ECoG platform; describes the different electrophysiological features that can be detected in ECoG; elaborates on the signal acquisition issues, protocols, and online performance of ECoG- based BCI studies to date; presents important limitations of current ECoG studies; discusses opportunities for further research; and finally presents a vision for eventual clinical implementation. In summary, the studies presented to date strongly encourage further research using the ECoG platform for basic neuroscientific research, as well as for translational neuroprosthetic applications.", "title": "" } ]
scidocsrr
21f33a54df4d1e710ed02ca54dccd910
Comparing SVM and convolutional networks for epileptic seizure prediction from intracranial EEG
[ { "docid": "dee2b99fd5ae1d48c8e8b29047aa97ce", "text": "Nonlinear time series analysis techniques have been proposed to detect changes in the electroencephalography dynamics prior to epileptic seizures. Their applicability in practice to predict seizure onsets is hampered by the present lack of generally accepted standards to assess their performance. We propose an analytic approach to judge the prediction performance of multivariate seizure prediction methods. Statistical tests are introduced to assess patient individual results, taking into account that prediction methods are applied to multiple time series and several seizures. Their performance is illustrated utilizing a bivariate seizure prediction method based on synchronization theory.", "title": "" } ]
[ { "docid": "eb2d29417686cc86a45c33694688801f", "text": "We present a method to incorporate global orientation information from the sun into a visual odometry pipeline using only the existing image stream, where the sun is typically not visible. We leverage recent advances in Bayesian Convolutional Neural Networks to train and implement a sun detection model that infers a three-dimensional sun direction vector from a single RGB image. Crucially, our method also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo visual odometry pipeline where accurate uncertainty estimates are critical for optimal data fusion. Our Bayesian sun detection model achieves a median error of approximately 12 degrees on the KITTI odometry benchmark training set, and yields improvements of up to 42% in translational ARMSE and 32% in rotational ARMSE compared to standard VO. An open source implementation of our Bayesian CNN sun estimator (Sun-BCNN) using Caffe is available at https://github.com/utiasSTARS/sun-bcnn-vo.", "title": "" }, { "docid": "7e7314256a28deb2250377e9e74c5413", "text": "After stress, the brain is exposed to waves of stress mediators, including corticosterone (in rodents) and cortisol (in humans). Corticosteroid hormones affect neuronal physiology in two time-domains: rapid, non-genomic actions primarily via mineralocorticoid receptors; and delayed genomic effects via glucocorticoid receptors. In parallel, cognitive processing is affected by stress hormones. Directly after stress, emotional behaviour involving the amygdala is strongly facilitated with cognitively a strong emphasis on the \"now\" and \"self,\" at the cost of higher cognitive processing. This enables the organism to quickly and adequately respond to the situation at hand. Several hours later, emotional circuits are dampened while functions related to the prefrontal cortex and hippocampus are promoted. This allows the individual to rationalize the stressful event and place it in the right context, which is beneficial in the long run. The brain's response to stress depends on an individual's genetic background in interaction with life events. Studies in rodents point to the possibility to prevent or reverse long-term consequences of early life adversity on cognitive processing, by normalizing the balance between the two receptor types for corticosteroid hormones at a critical moment just before the onset of puberty.", "title": "" }, { "docid": "b9d12a2c121823a81902375f6be893bb", "text": "Internet users are often victimized by malicious attackers. Some attackers infect and use innocent users’ machines to launch large-scale attacks without the users’ knowledge. One of such attacks is the click-fraud attack. Click-fraud happens in Pay-Per-Click (PPC) ad networks where the ad network charges advertisers for every click on their ads. Click-fraud has been proved to be a serious problem for the online advertisement industry. In a click-fraud attack, a user or an automated software clicks on an ad with a malicious intent and advertisers need to pay for those valueless clicks. Among many forms of click-fraud, botnets with the automated clickers are the most severe ones. In this paper, we present a method for detecting automated clickers from the user-side. The proposed method to Fight Click-Fraud, FCFraud, can be integrated into the desktop and smart device operating systems. 
Since most modern operating systems already provide some kind of anti-malware service, our proposed method can be implemented as a part of the service. We believe that an effective protection at the operating system level can save billions of dollars of the advertisers. Experiments show that FCFraud is 99.6% (98.2% in mobile ad library generated traffic) accurate in classifying ad requests from all user processes and it is 100% successful in detecting clickbots in both desktop and mobile devices. We implement a cloud backend for the FCFraud service to save battery power in mobile devices. The overhead of executing FCFraud is also analyzed and we show that it is reasonable for both the platforms. Copyright c © 2016 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "8230003e8be37867e0e4fc7320e24448", "text": "This document was approved as policy of the American Psychological Association (APA) by the APA Council of Representatives in August, 2002. This document was drafted by a joint Task Force of APA Divisions 17 (Counseling Psychology) and 45 (The Society for the Psychological Study of Ethnic Minority Issues). These guidelines have been in the process of development for 22 years, so many individuals and groups require acknowledgement. The Divisions 17/45 writing team for the present document included Nadya Fouad, PhD, Co–Chair, Patricia Arredondo, EdD, Co– Chair, Michael D'Andrea, EdD and Allen Ivey, EdD. These guidelines build on work related to multicultural counseling competencies by Division 17 (Sue et al., 1982) and the Association of Multicultural Counseling and Development (Arredondo et al., 1996; Sue, Arredondo, & McDavis, 1992). The Task Force acknowledges Allen Ivey, EdD, Thomas Parham, PhD, and Derald Wing Sue, PhD for their leadership related to the work on competencies. The Divisions 17/45 writing team for these guidelines was assisted in reviewing the relevant literature by Rod Goodyear, PhD, Jeffrey S. Mio, PhD, Ruperto (Toti) Perez, PhD, William Parham, PhD, and Derald Wing Sue, PhD. Additional writing contributions came from Gail Hackett, PhD, Jeanne Manese, PhD, Louise Douce, PhD, James Croteau, PhD, Janet Helms, PhD, Sally Horwatt, PhD, Kathleen Boggs, PhD, Gerald Stone, PhD, and Kathleen Bieschke, PhD. Editorial contributions were provided by Nancy Downing Hansen, PhD, Patricia Perez, Tiffany Rice, and Dan Rosen. The Task Force is grateful for the active support and contributions of a series of presidents of APA Divisions 17, 35, and 45, including Rosie Bingham, PhD, Jean Carter, PhD, Lisa Porche Burke, PhD, Gerald Stone, PhD, Joseph Trimble, PhD, Melba Vasquez, PhD, and Jan Yoder, PhD. Other individuals who contributed through their advocacy include Guillermo Bernal, PhD, Robert Carter, PhD, J. Manuel Casas, PhD, Don Pope–Davis, PhD, Linda Forrest, PhD, Margaret Jensen, PhD, Teresa LaFromboise, PhD, Joseph G. Ponterotto, PhD, and Ena Vazquez Nuttall, EdD.", "title": "" }, { "docid": "6f0d9f383c0142b43ea440e6efb2a59a", "text": "OBJECTIVES\nTo evaluate the effect of a probiotic product in acute self-limiting gastroenteritis in dogs.\n\n\nMETHODS\nThirty-six dogs suffering from acute diarrhoea or acute diarrhoea and vomiting were included in the study. The trial was performed as a randomised, double blind and single centre study with stratified parallel group design. The animals were allocated to equal looking probiotic or placebo treatment by block randomisation with a fixed block size of six. 
The probiotic cocktail consisted of thermo-stabilised Lactobacillus acidophilus and live strains of Pediococcus acidilactici, Bacillus subtilis, Bacillus licheniformis and Lactobacillus farciminis.\n\n\nRESULTS\nThe time from initiation of treatment to the last abnormal stools was found to be significantly shorter (P = 0.04) in the probiotic group compared to placebo group, the mean time was 1.3 days and 2.2 days, respectively. The two groups were found nearly equal with regard to time from start of treatment to the last vomiting episode.\n\n\nCLINICAL SIGNIFICANCE\nThe probiotic tested may reduce the convalescence time in acute self-limiting diarrhoea in dogs.", "title": "" }, { "docid": "d3e35963e85ade6e3e517ace58cb3911", "text": "In this paper, we present the design and evaluation of PeerDB, a peer-to-peer (P2P) distributed data sharing system. PeerDB distinguishes itself from existing P2P systems in several ways. First, it is a full-fledge data management system that supports fine-grain content-based searching. Second, it facilitates sharing of data without shared schema. Third, it combines the power of mobile agents into P2P systems to perform operations at peers’ sites. Fourth, PeerDB network is self-configurable, i.e., a node can dynamically optimize the set of peers that it can communicate directly with based on some optimization criterion. By keeping peers that provide most information or services in close proximity (i.e, direct communication), the network bandwidth can be better utilized and system performance can be optimized. We implemented and evaluated PeerDB on a cluster of 32 Pentium II PCs. Our experimental results show that PeerDB can effectively exploit P2P technologies for distributed data sharing.", "title": "" }, { "docid": "709a8a9d4afe6db277b5546c5c72bfd6", "text": "This paper presents a new high-voltage pulse generator, which is based on positive and negative buck-boost (BB) converters fed from a relatively low voltage dc supply. The proposed generator is able to generate unipolar or bipolar high-voltage pulses via operating the BB converters with series or parallel connected outputs respectively using a common input dc source. The components of each converter are rated at half of the pulsed voltage magnitude in the unipolar mode. The converters in the proposed pulse generator operate in discontinuous conduction mode. This enhances system efficiency, as the circuit only operates when it is desired to generate a pulsed output voltage, otherwise the circuit is switched to idle mode with zero current. Detailed illustration of the proposed approach is presented along with a full design of the system components for given output pulse specifications. Finally, simulation and experimental results are presented to validate the proposed concept.", "title": "" }, { "docid": "3a1f2070cad8641d9116c3738a36e5bc", "text": "Several real-world prediction problems are subject to changes over time due to their dynamic nature. These changes, named concept drift, usually lead to immediate and disastrous loss in classifier's performance. In order to cope with such a serious problem, drift detection methods have been proposed in the literature. However, current methods cannot be widely used since they are based either on performance monitoring or on fully labeled data, or even both. Focusing on overcoming these drawbacks, in this work we propose using density variation of the most significant instances as an explicit unsupervised trigger for concept drift detection. 
Here, density variation is based on Active Learning, and it is calculated from virtual margins projected onto the input space according to classifier confidence. In order to investigate the performance of the proposed method, we have carried out experiments on six databases, precisely four synthetic and two real databases focusing on setting up all parameters involved in our method and on comparing it to three baselines, including two supervised drift detectors and one Active Learning-based strategy. The obtained results show that our method, when compared to the supervised baselines, reached better recognition rates in the majority of the investigated databases, while keeping similar or higher detection rates. In terms of the Active Learning-based strategies comparison, our method outperformed the baseline taking into account both recognition and detection rates, even though the baseline employed much less labeled samples. Therefore, the proposed method established a better trade-off between amount of labeled samples and detection capability, as well as recognition rate.", "title": "" }, { "docid": "f5135c2d6038efe58fd3b8f6d17fc589", "text": "Aim: The foremost aim of the study was to investigate and analyze the relationship of General Mental Ability, Interest and home environmentwith Academic Achievement.Methods:The participants were 110 students drawn from three KendryaVidyalayas of Delhi. Their ages ranged between 13 and 14 with a mean age of 13.6 years. Two validated instruments were used to elicit responses from the participants-General mental ability test prepared by R. K Tandon (1972),Multiphasic Interest Inventory of S. K. Bawa (1998) and Home EnvironmentInventory of K S Mishra (1989) were administered on the selected sample. Whereas their annual examination grades of class VIIwere considered as academic achievement.Findings: Four major hypotheses were formulated and tested at 0.01 level of significance. Pearson-Moment Correlation Co-efficient and t-test were used to analyze the data. The study reveals that General Mental Ability, home environmentInterest and academic achievement are significantly and positively correlated. Whereas the high score of girls indicates that they are superior to boys.", "title": "" }, { "docid": "cc92787280db22c46a159d95f6990473", "text": "A novel formulation for the voltage waveforms in high efficiency linear power amplifiers is described. This formulation demonstrates that a constant optimum efficiency and output power can be obtained over a continuum of solutions by utilizing appropriate harmonic reactive impedance terminations. A specific example is confirmed experimentally. This new formulation has some important implications for the possibility of realizing broadband >10% high efficiency linear RF power amplifiers.", "title": "" }, { "docid": "a7fea910d7ecb4de5cbf3e22a1a6f51b", "text": "In this paper, we propose a set of dimensioning rules, which deliver high quality session-based services over a Next Generation Network based IP/MPLS transport infrastructure. In particular, we develop a detailed dimensioning methodology for improving a target QoS requirement. The proposed methodology outlines an optimal equipment allocation strategy for a requested capacity. The benefits of operating a network under the paradigm of generous dimensioning, for converged multiservice traffic flows, include target QoS guarantee, scalability, and network resilience. 
We present and discuss experimental results which illustrate a practical implementation of the proposed dimensioning strategy and its benefits.", "title": "" }, { "docid": "e158971a53492c6b9e21116da891de1a", "text": "A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for the ability of these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, and neural network theory, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new algorithm, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep neural network training, and provide preliminary numerical evidence for its superior performance.", "title": "" }, { "docid": "7a05f2c12c3db9978807eb7c082db087", "text": "This paper discusses the importance, the complexity and the challenges of mapping mobile robot’s unknown and dynamic environment, besides the role of sensors and the problems inherited in map building. These issues remain largely an open research problems in developing dynamic navigation systems for mobile robots. The paper presenst the state of the art in map building and localization for mobile robots navigating within unknown environment, and then introduces a solution for the complex problem of autonomous map building and maintenance method with focus on developing an incremental grid based mapping technique that is suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment of the robot. The robot maps its environment incrementally using the concept of occupancy grids and the fusion of multiple ultrasonic sensory information while wandering in it and stay away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in parallel and distributed framework. Simulation based experiments has been conducted and illustrated to show the validity of the developed mapping and obstacle avoidance approach.", "title": "" }, { "docid": "8f5747a5503c9e5ab1945e2ac42516a4", "text": "Mental wellbeing is the combination of feeling good and functioning well. Digital technology widens the opportunities for promoting mental wellbeing, particularly among those young people for whom technology is an ordinary part of life. This paper presents an initial review of publicly available apps and websites that have a primary purpose of promoting mental wellbeing. The review was in two stages: first, the interdisciplinary research team identified and reviewed 14 apps/websites, then 13 young people (7 female, 6 male) aged 12–18 years reviewed 11 of the apps/websites. 
Overall, the reviewers’ views were positive, although some significant criticisms were made. Based on the findings of the study, initial recommendations are offered to improve the design of apps/websites for promoting mental wellbeing among young people aged 12–18 years: highlight any age limits, provide information on mental wellbeing, improve findability, ensure accessibility on school computers, and highlight if young people were involved in design.", "title": "" }, { "docid": "023302562ddfe48ac81943fedcf881b7", "text": "Knitty is an interactive design system for creating knitted animals. The user designs a 3D surface model using a sketching interface. The system automatically generates a knitting pattern and then visualizes the shape of the resulting 3D animal model by applying a simple physics simulation. The user can see the resulting shape before beginning the actual knitting. The system also provides a production assistant interface for novices. The user can easily understand how to knit each stitch and what to do in each step. In a workshop for novices, we observed that even children can design their own knitted animals using our system.", "title": "" }, { "docid": "188d26992b9b30495fa1c432cf49d649", "text": "We consider the problem of dynamically maintaining (approximate) all-pairs effective resistances in separable graphs, which are those that admit an n^c-separator theorem for some c < 1. We give a fully dynamic algorithm that maintains (1 + ε)-approximations of the all-pairs effective resistances of an n-vertex graph G undergoing edge insertions and deletions with Õ(√n/ε) worst-case update time and Õ(√n/ε) worst-case query time, if G is guaranteed to be √n-separable (i.e., it is taken from a class satisfying a √n-separator theorem) and its separator can be computed in Õ(n) time. Our algorithm is built upon a dynamic algorithm for maintaining approximate Schur complement that approximately preserves pairwise effective resistances among a set of terminals for separable graphs, which might be of independent interest. We complement our result by proving that for any two fixed vertices s and t, no incremental or decremental algorithm can maintain the s−t effective resistance for √n-separable graphs with worst-case update time O(n) and query time O(n) for any δ > 0, unless the Online Matrix Vector Multiplication (OMv) conjecture is false. We further show that for general graphs, no incremental or decremental algorithm can maintain the s−t effective resistance problem with worst-case update time O(n) and query time O(n) for any δ > 0, unless the OMv conjecture is false. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 340506. University of Vienna, Faculty of Computer Science, Vienna, Austria. E-mail: gramoz.goranci@univie.ac.at. University of Vienna, Faculty of Computer Science, Vienna, Austria. E-mail: monika.henzinger@univie.ac.at. Department of Computer Science, University of Sheffield, Sheffield, UK. E-mail: p.peng@sheffield.ac.uk. Work done in part while at the Faculty of Computer Science, University of Vienna, Austria.", "title": "" }, { "docid": "9362781ea97715077d54e8e9645552e2", "text": "Web sites are often a mixture of static sites and programs that integrate relational databases as a back-end. Software that implements Web sites continuously evolve to meet ever-changing user needs.
As a Web sites evolve, new versions of programs, interactions and functionalities are added and existing ones are removed or modified. Web sites require configuration and programming attention to assure security, confidentiality, and trustiness of the published information. During evolution of Web software, from one version to the next one, security flaws may be introduced, corrected, or ignored. This paper presents an investigation of the evolution of security vulnerabilities as detected by propagating and combining granted authorization levels along an inter-procedural control flow graph (CFG) together with required security levels for DB accesses with respect to SQL-injection attacks. The paper reports results about experiments performed on 31 versions of phpBB, that is a publicly available bulletin board written in PHP, version 1.0.0 (9547 LOC) to version 2.0.22 (40663 LOC) have been considered as a case study. Results show that the vulnerability analysis can be used to observe and monitor the evolution of security vulnerabilities in subsequent versions of the same software package. Suggestions for further research are also presented.", "title": "" }, { "docid": "0e1f0eb73d2e27269ad305645eb4e236", "text": "Multi-label learning deals with data associated with multiple labels simultaneously. Previous work on multi-label learning assumes that for each instance, the “full” label set associated with each training instance is given by users. In many applications, however, to get the full label set for each instance is difficult and only a “partial” set of labels is available. In such cases, the appearance of a label means that the instance is associated with this label, while the absence of a label does not imply that this label is not proper for the instance. We call this kind of problem “weak label” problem. In this paper, we propose the WELL (WEak Label Learning) method to solve the weak label problem. We consider that the classification boundary for each label should go across low density regions, and that each label generally has much smaller number of positive examples than negative examples. The objective is formulated as a convex optimization problem which can be solved efficiently. Moreover, we exploit the correlation between labels by assuming that there is a group of low-rank base similarities, and the appropriate similarities between instances for different labels can be derived from these base similarities. Experiments validate the performance of WELL.", "title": "" }, { "docid": "4021a6d34ca5a6c3d2d021d0ba2cbcf7", "text": "Visual compatibility is critical for fashion analysis, yet is missing in existing fashion image synthesis systems. In this paper, we propose to explicitly model visual compatibility through fashion image inpainting. To this end, we present Fashion Inpainting Networks (FiNet), a two-stage image-to-image generation framework that is able to perform compatible and diverse inpainting. Disentangling the generation of shape and appearance to ensure photorealistic results, our framework consists of a shape generation network and an appearance generation network. More importantly, for each generation network, we introduce two encoders interacting with one another to learn latent code in a shared compatibility space. The latent representations are jointly optimized with the corresponding generation network to condition the synthesis process, encouraging a diverse set of generated results that are visually compatible with existing fashion garments. 
In addition, our framework is readily extended to clothing reconstruction and fashion transfer, with impressive results. Extensive experiments with comparisons with state-of-the-art approaches on fashion synthesis task quantitatively and qualitatively demonstrate the effectiveness of our method.", "title": "" }, { "docid": "ef2738cfced7ef069b13e5b5dca1558b", "text": "Organic agriculture (OA) is practiced on 1% of the global agricultural land area and its importance continues to grow. Specifically, OA is perceived by many as having less Advances inAgronomy, ISSN 0065-2113 © 2016 Elsevier Inc. http://dx.doi.org/10.1016/bs.agron.2016.05.003 All rights reserved. 1 ARTICLE IN PRESS", "title": "" } ]
scidocsrr
31480d5ea5075eaa3c6ae5e5ec7acaec
Improved competitive learning neural networks for network intrusion and fraud detection
[ { "docid": "342e3fd05878ebff3bc2686fb05009f5", "text": "Due to a rapid advancement in the electronic commerce technology, use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment, credit card frauds are becoming increasingly rampant in recent years. In this paper, we model the sequence of operations in credit card transaction processing using a confidence-based neural network. Receiver operating characteristic (ROC) analysis technology is also introduced to ensure the accuracy and effectiveness of fraud detection. A neural network is initially trained with synthetic data. If an incoming credit card transaction is not accepted by the trained neural network model (NNM) with sufficiently low confidence, it is considered to be fraudulent. This paper shows how confidence value, neural network algorithm and ROC can be combined successfully to perform credit card fraud detection.", "title": "" } ]
[ { "docid": "cddc28eb1ec92d4452f1b5b03a910b22", "text": "The English perfect involves two fundamental components of meaning: a truth-conditional component involving temporal notions and a presupposition best expressed in terms drawn from the analysis of modality. The semantics draws much for the Extended Now theory (McCoard 1978 and others), but improves on it by showing that many aspects of the perfect's temporal contribution may be factored out into independent semantic or pragmatic principles. The pragmatic analysis unifies views of the function of the perfect as indicating either a 'result state' or the 'current relevance' of some past event. This unification is naturally stated in terms similar to those used to explain the context-dependency of modals.", "title": "" }, { "docid": "5bd68fec2fc44048678a4f72f243ae1d", "text": "Recently, Vehicular Ad Hoc Networks (VANET) have attracted the attention of research communities, leading car manufacturers, and governments due to their potential applications and specific characteristics. Their research outcome was started with awareness between vehicles for collision avoidance and Internet access and then expanded to vehicular multimedia communications. Moreover, vehicles’ high computation, communication, and storage resources set a ground for vehicular networks to deploy these applications in the near future. Nevertheless, on-board resources in vehicles are mostly underutilized. Vehicular Cloud Computing (VCC) is developed to utilize the VANET resources efficiently and provide subscribers safe infotainment services. In this chapter, the authors perform a survey of state-of-the-art vehicular cloud computing as well as the existing techniques that utilize cloud computing for performance improvements in VANET. The authors then classify the VCC based on the applications, service types, and vehicular cloud organization. They present the detail for each VCC application and formation. Lastly, the authors discuss the open issues and research directions related to VANET cloud computing. Kayhan Zrar Ghafoor Koya University, Iraq Marwan Aziz Mohammed Koya University, Iraq Kamalrulnizam Abu Bakar Universiti Teknologi Malaysia, Malaysia Ali Safa Sadiq Universiti Teknologi Malaysia, Malaysia Jaime Lloret Universidad Politecnica de Valencia, Spain DOI: 10.4018/978-1-4666-4781-7.ch014", "title": "" }, { "docid": "322161b4a43b56e4770d239fe4d2c4c0", "text": "Graph pattern matching has become a routine process in emerging applications such as social networks. In practice a data graph is typically large, and is frequently updated with small changes. It is often prohibitively expensive to recompute matches from scratch via batch algorithms when the graph is updated. With this comes the need for incremental algorithms that compute changes to the matches in response to updates, to minimize unnecessary recomputation. This paper investigates incremental algorithms for graph pattern matching defined in terms of graph simulation, bounded simulation and subgraph isomorphism. (1) For simulation, we provide incremental algorithms for unit updates and certain graph patterns. These algorithms are optimal: in linear time in the size of the changes in the input and output, which characterizes the cost that is inherent to the problem itself. For general patterns we show that the incremental matching problem is unbounded, i.e., its cost is not determined by the size of the changes alone. 
(2) For bounded simulation, we show that the problem is unbounded even for unit updates and path patterns. (3) For subgraph isomorphism, we show that the problem is intractable and unbounded for unit updates and path patterns. (4) For multiple updates, we develop an incremental algorithm for each of simulation, bounded simulation and subgraph isomorphism. We experimentally verify that these incremental algorithms significantly outperform their batch counterparts in response to small changes, using real-life data and synthetic data.", "title": "" }, { "docid": "b6d8ba656a85955be9b4f34b07f54987", "text": "In real-world data, e.g., from Web forums, text is often contaminated with redundant or irrelevant content, which leads to introducing noise in machine learning algorithms. In this paper, we apply Long Short-Term Memory networks with an attention mechanism, which can select important parts of text for the task of similar question retrieval from community Question Answering (cQA) forums. In particular, we use the attention weights for both selecting entire sentences and their subparts, i.e., word/chunk, from shallow syntactic trees. More interestingly, we apply tree kernels to the filtered text representations, thus exploiting the implicit features of the subtree space for learning question reranking. Our results show that the attention-based pruning allows for achieving the top position in the cQA challenge of SemEval 2016, with a relatively large gap from the other participants while greatly decreasing running time.", "title": "" }, { "docid": "80c745ee8535d9d53819ced4ad8f996d", "text": "Wireless Sensor Networks (WSN) are vulnerable to various sensor faults and faulty measurements. This vulnerability hinders efficient and timely response in various WSN applications, such as healthcare. For example, faulty measurements can create false alarms which may require unnecessary intervention from healthcare personnel. Therefore, an approach to differentiate between real medical conditions and false alarms will improve remote patient monitoring systems and quality of healthcare service afforded by WSN. In this paper, a novel approach is proposed to detect sensor anomaly by analyzing collected physiological data from medical sensors. The objective of this method is to effectively distinguish false alarms from true alarms. It predicts a sensor value from historic values and compares it with the actual sensed value for a particular instance. The difference is compared against a threshold value, which is dynamically adjusted, to ascertain whether the sensor value is anomalous. The proposed approach has been applied to real healthcare datasets and compared with existing approaches. Experimental results demonstrate the effectiveness of the proposed system, providing high Detection Rate (DR) and low False Positive Rate (FPR).", "title": "" }, { "docid": "19c93bdba44de7d2d8e2f7e1a412d35a", "text": "Intense interest in applying convolutional neural networks (CNNs) in biomedical image analysis is wide spread, but its success is impeded by the lack of large annotated datasets in biomedical imaging. Annotating biomedical images is not only tedious and time consuming, but also demanding of costly, specialty - oriented knowledge and skills, which are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method called AIFT (active, incremental fine-tuning) to naturally integrate active learning and transfer learning into a single framework. 
AIFT starts directly with a pre-trained CNN to seek worthy samples from the unannotated for annotation, and the (fine-tuned) CNN is further fine-tuned continuously by incorporating newly annotated samples in each iteration to enhance the CNNs performance incrementally. We have evaluated our method in three different biomedical imaging applications, demonstrating that the cost of annotation can be cut by at least half. This performance is attributed to the several advantages derived from the advanced active and incremental capability of our AIFT method.", "title": "" }, { "docid": "5e6a2439641793594087d0543fcaec99", "text": "Background: Virtual Machine (VM) consolidation is an effective technique to improve resource utilization and reduce energy footprint in cloud data centers. It can be implemented in a centralized or a distributed fashion. Distributed VM consolidation approaches are currently gaining popularity because they are often more scalable than their centralized counterparts and they avoid a single point of failure. Objective: To present a comprehensive, unbiased overview of the state-of-the-art on distributed VM consolidation approaches. Method: A Systematic Mapping Study (SMS) of the existing distributed VM consolidation approaches. Results: 19 papers on distributed VM consolidation categorized in a variety of ways. The results show that the existing distributed VM consolidation approaches use four types of algorithms, optimize a number of different objectives, and are often evaluated with experiments involving simulations. Conclusion: There is currently an increasing amount of interest on developing and evaluating novel distributed VM consolidation approaches. A number of research gaps exist where the focus of future research may be directed.", "title": "" }, { "docid": "be597281ff92bb368803b5e5fe584f9c", "text": "We describe the results of the Transformation Tool Contest 2010 workshop, in which nine graph and model transformation tools were compared for specifying model migration. The model migration problem—migration of UML activity diagrams from version 1.4 to version 2.2—is non-trivial and practically relevant. The solutions have been compared with respect to several criteria: correctness, conciseness, understandability, appropriateness, maturity and support for extensions to the core migration task. We describe in detail the comparison method, and discuss the strengths and weaknesses of the solutions with a special focus on the differences between graph and model transformation for model migration. The comparison results demonstrate tool and language features that strongly impact the efficacy of solutions, such as support for retyping of model elements. The results are used to motivate an agenda for future model migration research (including suggestions for areas in which the tools need to be further improved).", "title": "" }, { "docid": "d7a348b092064acf2d6a4fd7d6ef8ee2", "text": "Argumentation theory involves the analysis of naturally occurring argument, and one key tool employed to this end both in the academic community and in teaching critical thinking skills to undergraduates is argument diagramming. By identifying the structure of an argument in terms of its constituents and the relationships between them, it becomes easier to critically evaluate each part of an argument in turn. The task of analysis and diagramming, however, is labor intensive and often idiosyncratic, which can make academic exchange difficult. 
The Araucaria system provides an interface which supports the diagramming process, and then saves the result using AML, an open standard, designed in XML, for describing argument structure. Araucaria aims to be of use not only in pedagogical situations, but also in support of research activity. As a result, it has been designed from the outset to handle more advanced argumentation theoretic concepts such as schemes, which capture stereotypical patterns of reasoning. The software is also designed to be compatible with a number of applications under development, including dialogic interaction and online corpus provision. Together, these features, combined with its platform independence and ease of use, have the potential to make Araucaria a valuable resource for the academic community.", "title": "" }, { "docid": "44d2cecbca397598e02ce34d8e396da8", "text": "In this paper, a CMOS sub-1-V nanopower reference is proposed, which is implemented without resistors and with only standard CMOS transistors. The proposed circuit has the most attractive merit that it can afford reference current and reference voltage simultaneously. Moreover, the leakage compensation technique is utilized, and thus it has very low temperature coefficient for a wide temperature range. The proposed circuit is verified by SPICE simulation with CMOS 0.18um process. The temperature coefficients of the reference voltage and reference current are 0.0037%/°C and 0.0091%/°C, respectively. Also, the power supply voltage can be as low as 0.85V and its power consumption is only 5.1nW.", "title": "" }, { "docid": "c8ff17e01a592a0b3324dd662e9b9f5a", "text": "Nowadays, people can access information more rapidly than ever before thanks to advances in technology. A website is a tool through which an organization presents its information, and website content is the most important consideration because it conveys everything the organization wants to present. Most educational institutions use websites to present their information to the outside world, and an institution's website also helps build people's trust in the institution. In particular, the ranking of an educational institution's website adds value to the institution. There are many rankings that rank educational websites for different purposes. The Webometrics Ranking of World Universities is a ranking of educational institution websites that assesses the accessibility of website content; the purpose of the ranking is to evaluate website accessibility with a focus on visibility and activity. This article aims to evaluate the total number of backlinks, number of page views, average visit duration, and bounce rate of Thai university websites in order to find the relationship between the number of backlinks, average page views, visit duration, bounce rate, and Webometrics ranking.", "title": "" }, { "docid": "136765561a6d83c049f8eef005979596", "text": "Generative networks have become ubiquitous in image generation applications like image super-resolution, image to image translation, and text to image synthesis. They are usually composed of convolutional (CONV) layers, convolution-based residual blocks, and deconvolutional (DeCONV) layers.
Previous works on neural network acceleration focus too much on optimizing CONV layers computation such as data-reuse or parallel computation, but have low processing element (PE) utilization in computing residual blocks and DeCONV layers: residual blocks require very high memory bandwidth when performing elementwise additions on residual paths; DeCONV layers have imbalanced operation counts for different outputs. In this paper, we propose a dual convolution mapping method for CONV and DeCONV layers to make full use of the available PE resources. A cross-layer scheduling method is also proposed to avoid extra off-chip memory access in residual block processing. Precision-adaptive PEs and buffer bandwidth reconfiguration are used to support flexible bitwidths for both inputs and weights in deep neural networks. We implement a generative network accelerator (GNA) based on intra-PE processing, inter-PE processing, and cross-layer scheduling techniques. Owing to the proposed optimization techniques, GNA achieves energy efficiency of 2.05 TOPS/W with 61% higher PE utilization than traditional methods in generative network acceleration.", "title": "" }, { "docid": "f7e773113b9006256ab51d975c8f53c5", "text": "Received 12/4/2013 Accepted 19/6/2013 (006063) 1 Laboratorio Integral de Investigación en Alimentos – LIIA, Instituto Tecnológico de Tepic – ITT, Av. Tecnológico, 2595, CP 63175, Tepic, Nayarit, México, e-mail: efimontalvo@gmail.com 2 Dirección General de Innovación Tecnológica, Centro de Excelencia, Universidad Autónoma de Tamaulipas – UAT, Ciudad Victoria, Tamaulipas, México 3 Centro de Investigación en Ciencia Aplicada y Tecnología Avanzada – CICATA, Instituto Politécnico Nacional – IPN, Querétaro, Querétaro, México *Corresponding author Effect of high hydrostatic pressure on antioxidant content of ‘Ataulfo’ mango during postharvest maturation Viviana Guadalupe ORTEGA1, José Alberto RAMÍREZ2, Gonzalo VELÁZQUEZ3, Beatriz TOVAR1, Miguel MATA1, Efigenia MONTALVO1*", "title": "" }, { "docid": "0398e0ea6f0bf40a90a152616f418016", "text": "The next flagship supercomputer in Japan, replacement of K supercomputer, is being designed toward general operation in 2020. Compute nodes, based on a manycore architecture, connected by a 6-D mesh/torus network is considered. A three level hierarchical storage system is taken into account. A heterogeneous operating system, Linux and a light-weight kernel, is designed to build suitable environments for applications. It cannot be possible without codesign of applications that the system software is designed to make maximum utilization of compute and storage resources. After a brief introduction of the post K supercomputer architecture, the design issues of the system software will be presented. Two big-data applications, genome processing and meteorological and global environmental predictions will be sketched out as target applications in the system software design. Then, it will be presented how these applications' demands affect the system software.", "title": "" }, { "docid": "19a785bf6dd9102629d2d94aa7a489d8", "text": "Parallel sorting networks are widely employed in hardware implementations for sorting due to their high data parallelism and low control overhead. In this paper, we propose an energy and memory efficient mapping methodology for implementing bitonic sorting network on FPGA. Using this methodology, the proposed sorting architecture can be built for a given data parallelism while supporting continuous data streams. 
We propose a streaming permutation network (SPN) by \"folding\" the classic Clos network. We prove that the SPN is programmable to realize all the interconnection patterns in the bitonic sorting network. A low cost design for sorting with minimal resource usage is obtained by reusing one SPN. We also demonstrate a high throughput design by trading off area for performance. With a data parallelism of p (2 ≤ p ≤ N/log2 N), the high throughput design sorts an N-key sequence with latency O(N/p), throughput (# of keys sorted per cycle) O(p) and uses O(N) memory. This achieves optimal memory efficiency (defined as the ratio of throughput to the amount of on-chip memory used by the design) of O(p/N). Another noteworthy feature of the high throughput design is that only single-port memory rather than dual-port memory is required for processing continuous data streams. This results in 50% reduction in memory consumption. Post place-and-route results show that our architecture demonstrates 1.3x ∼ 1.6x improvement in energy efficiency and 1.5x ∼ 5.3x better memory efficiency compared with the state-of-the-art designs.", "title": "" }, { "docid": "ad53198bab3ad3002b965914f92ce3c9", "text": "Adaptive Learning Algorithms for Transferable Visual Recognition by Judith Hoffman Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences University of California, Berkeley Professor Trevor Darrell, Chair Understanding visual scenes is a crucial piece in many artificial intelligence applications ranging from autonomous vehicles and household robotic navigation to automatic image captioning for the blind. Reliably extracting high-level semantic information from the visual world in real-time is key to solving these critical tasks safely and correctly. Existing approaches based on specialized recognition models are prohibitively expensive or intractable due to limitations in dataset collection and annotation. By facilitating learned information sharing between recognition models these applications can be solved; multiple tasks can regularize one another, redundant information can be reused, and the learning of novel tasks is both faster and easier. In this thesis, I present algorithms for transferring learned information between visual data sources and across visual tasks all with limited human supervision. I will both formally and empirically analyze the adaptation of visual models within the classical domain adaptation setting and extend the use of adaptive algorithms to facilitate information transfer between visual tasks and across image modalities. Most visual recognition systems learn concepts directly from a large collection of manually annotated images/videos. A model which detects pedestrians requires a human to manually go through thousands or millions of images and indicate all instances of pedestrians. However, this model is susceptible to biases in the labeled data and often fails to generalize to new scenarios: a detector trained in Palo Alto may have degraded performance in Rome, or a detector trained in sunny weather may fail in the snow.
Rather than require human supervision for each new task or scenario, this work draws on deep learning, transformation learning, and convex-concave optimization to produce novel optimization frameworks which transfer information from the large curated databases to real world scenarios.", "title": "" }, { "docid": "5f8a2db77dfa71ea2051a1a92b97f1f5", "text": "Online communities are getting increasingly important for several different user groups; at the same time, community members seem to lack loyalty, as they often change from one community to another or use their community less over time. To survive and thrive, online communities must meet members' needs. By using qualitative data are from an extensive online survey of online community users and a representative sample of Internet users, 200 responses to an open quesion regarding community-loyalty was analyzed. Results show that there are 9 main reasons why community-users decrease in their participation over time or, in simple terms, stop using their online community: 1) Lack of interesting people/friends attending, 2) Low quality content, 3) Low usability, 4) Harassment and bullying 5) Time-consuming/isolating, 6) Low trust, 7) Over-commercialized, 8) Dissatisfaction with moderators and 9) Unspecified boring. The results, design implications and future research are discussed.", "title": "" }, { "docid": "c0484f3055d7e7db8dfea9d4483e1e06", "text": "Metastasis the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.", "title": "" }, { "docid": "bc64d6626da03b4cad4c712e186fa476", "text": "Mobile digital technologies and networks have fueled a recent proliferation of opportunities for pervasive play in everyday spaces. In this paper, I examine how players negotiate the boundary between these pervasive games and real life. I trace the emergence of what I call “the Pinocchio effect” – the desire for a game to be transformed into real life, or conversely, for everyday life to be transformed into a \"real little game.” Focusing on two examples of pervasive play – the 2001 immersive game known as the Beast, and the Go Game, an ongoing urban superhero game — I argue that gamers maximize their play experience by performing belief, rather than actually believing, in the permeability of the game-reality boundary.", "title": "" }, { "docid": "86820c43e63066930120fa5725b5b56d", "text": "We introduce Wiktionary as an emerging lexical semantic resource that can be used as a substitute for expert-made resources in AI applications. 
We evaluate Wiktionary on the pervasive task of computing semantic relatedness for English and German by means of correlation with human rankings and solving word choice problems. For the first time, we apply a concept vector based measure to a set of different concept representations like Wiktionary pseudo glosses, the first paragraph of Wikipedia articles, English WordNet glosses, and GermaNet pseudo glosses. We show that: (i) Wiktionary is the best lexical semantic resource in the ranking task and performs comparably to other resources in the word choice task, and (ii) the concept vector based approach yields the best results on all datasets in both evaluations.", "title": "" } ]
scidocsrr
f7b61cb2cffa006e33c7f53cd7dd3421
Ultra-Wideband Phase Shifters
[ { "docid": "54f95cef02818cb4eb86339ee12a8b07", "text": "The problem of discontinuities in broadband multisection coupled-stripline 3-dB directional couplers, phase shifters, high-pass tapered-line 3-dB directional couplers, and magic-T's, regarding the connections of coupled and terminating signal lines, is comprehensively investigated in this paper for the first time. The equivalent circuit of these discontinuities proposed in Part I has been used for accurate modeling of the broadband multisection and ultra-broadband high-pass coupled-stripline circuits. It has been shown that parasitic reactances, which result from the connections of signal and coupled lines, severely deteriorate the return losses and the isolation of such circuits and also-in case of tapered-line directional couplers-the coupling responses. Moreover, it has been proven theoretically and experimentally that these discontinuity effects can be substantially reduced by introducing compensating shunt capacitances in a number of cross sections of coupled and signal lines. Results of measurements carried out for various designed and manufactured coupled-line circuits have been very promising and have proven the efficiency of the proposed broadband compensation technique. The theoretical and measured data are given for the following coupled-stripline circuits: a decade-bandwidth asymmetric three-section 3-dB directional coupler, a decade-bandwidth three-section phase-shifter compensator, and a high-pass asymmetric tapered-line 3-dB coupler", "title": "" } ]
[ { "docid": "02f28b1237b88471b0d96e5ff3871dc4", "text": "Data mining is becoming increasingly important since the size of databases grows even larger and the need to explore hidden rules from the databases becomes widely recognized. Currently database systems are dominated by relational database and the ability to perform data mining using standard SQL queries will definitely ease implementation of data mining. However the performance of SQL based data mining is known to fall behind specialized implementation and expensive mining tools being on sale. In this paper we present an evaluation of SQL based data mining on commercial RDBMS (IBM DB2 UDB EEE). We examine some techniques to reduce I/O cost by using View and Subquery. Those queries can be more than 6 times faster than SETM SQL query reported previously. In addition, we have made performance evaluation on parallel database environment and compared the performance result with commercial data mining tool (IBM Intelligent Miner). We prove that SQL based data mining can achieve sufficient performance by the utilization of SQL query customization and database tuning.", "title": "" }, { "docid": "541ebcc2e081ea1a08bbaba2e9820510", "text": "We present an analytic study on the language of news media in the context of political fact-checking and fake news detection. We compare the language of real news with that of satire, hoaxes, and propaganda to find linguistic characteristics of untrustworthy text. To probe the feasibility of automatic political fact-checking, we also present a case study based on PolitiFact.com using their factuality judgments on a 6-point scale. Experiments show that while media fact-checking remains to be an open research question, stylistic cues can help determine the truthfulness of text.", "title": "" }, { "docid": "639ef3a979e916a6e38b32243235b73a", "text": "Little is known about the specific kinds of questions programmers ask when evolving a code base and how well existing tools support those questions. To better support the activity of programming, answers are needed to three broad research questions: 1) What does a programmer need to know about a code base when evolving a software system? 2) How does a programmer go about finding that information? 3) How well do existing tools support programmers in answering those questions? We undertook two qualitative studies of programmers performing change tasks to provide answers to these questions. In this paper, we report on an analysis of the data from these two user studies. This paper makes three key contributions. The first contribution is a catalog of 44 types of questions programmers ask during software evolution tasks. The second contribution is a description of the observed behavior around answering those questions. The third contribution is a description of how existing deployed and proposed tools do, and do not, support answering programmers' questions.", "title": "" }, { "docid": "10298bbeb9e361b9a841175590c8be7f", "text": "BACKGROUND\nPregnant women with an elevated viral load of hepatitis B virus (HBV) have a risk of transmitting infection to their infants, despite the infants' receiving hepatitis B immune globulin.\n\n\nMETHODS\nIn this multicenter, double-blind clinical trial performed in Thailand, we randomly assigned hepatitis B e antigen (HBeAg)-positive pregnant women with an alanine aminotransferase level of 60 IU or less per liter to receive tenofovir disoproxil fumarate (TDF) or placebo from 28 weeks of gestation to 2 months post partum. 
Infants received hepatitis B immune globulin at birth and hepatitis B vaccine at birth and at 1, 2, 4, and 6 months. The primary end point was a hepatitis B surface antigen (HBsAg)-positive status in the infant, confirmed by the HBV DNA level at 6 months of age. We calculated that a sample of 328 women would provide the trial with 90% power to detect a difference of at least 9 percentage points in the transmission rate (expected rate, 3% in the TDF group vs. 12% in the placebo group).\n\n\nRESULTS\nFrom January 2013 to August 2015, we enrolled 331 women; 168 women were randomly assigned to the TDF group and 163 to the placebo group. At enrollment, the median gestational age was 28.3 weeks, and the median HBV DNA level was 8.0 log10 IU per milliliter. Among 322 deliveries (97% of the participants), there were 319 singleton births, two twin pairs, and one stillborn infant. The median time from birth to administration of hepatitis B immune globulin was 1.3 hours, and the median time from birth to administration of hepatitis B vaccine was 1.2 hours. In the primary analysis, none of the 147 infants (0%; 95% confidence interval [CI], 0 to 2) in the TDF group were infected, as compared with 3 of 147 (2%; 95% CI, 0 to 6) in the placebo group (P=0.12). The rate of adverse events did not differ significantly between groups. The incidence of a maternal alanine aminotransferase level of more than 300 IU per liter after discontinuation of the trial regimen was 6% in the TDF group and 3% in the placebo group (P=0.29).\n\n\nCONCLUSIONS\nIn a setting in which the rate of mother-to-child HBV transmission was low with the administration of hepatitis B immune globulin and hepatitis B vaccine in infants born to HBeAg-positive mothers, the additional maternal use of TDF did not result in a significantly lower rate of transmission. (Funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development; ClinicalTrials.gov number, NCT01745822 .).", "title": "" }, { "docid": "f4427b472b6e94faadbd49e422ef9200", "text": "Amlinger, L. 2017. The type I-E CRISPR-Cas system. Biology and applications of an adaptive immune system in bacteria. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1466. 61 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9787-3. CRISPR-Cas systems are adaptive immune systems in bacteria and archaea, consisting of a clustered regularly interspaced short palindromic repeats (CRISPR) array and CRISPR associated (Cas) proteins. In this work, the type I-E CRISPR-Cas system of Escherichia coli was studied. CRISPR-Cas immunity is divided into three stages. In the first stage, adaptation, Cas1 and Cas2 store memory of invaders in the CRISPR array as short intervening sequences, called spacers. During the expression stage, the array is transcribed, and subsequently processed into small CRISPR RNAs (crRNA), each consisting of one spacer and one repeat. The crRNAs are bound by the Cascade multi-protein complex. During the interference step, Cascade searches for DNA molecules complementary to the crRNA spacer. When a match is found, the target DNA is degraded by the recruited Cas3 nuclease. Host factors required for integration of new spacers into the CRISPR array were first investigated. Deleting recD, involved in DNA repair, abolished memory formation by reducing the concentration of the Cas1-Cas2 expression plasmid, leading to decreased amounts of Cas1 to levels likely insufficient for spacer integration. 
Deletion of RecD has an indirect effect on adaptation. To facilitate detection of adaptation, a sensitive fluorescent reporter was developed where an out-of-frame yfp reporter gene is moved into frame when a new spacer is integrated, enabling fluorescent detection of adaptation. Integration can be detected in single cells by a variety of fluorescence-based methods. A second aspect of this thesis aimed at investigating spacer elements affecting target interference. Spacers with predicted secondary structures in the crRNA impaired the ability of the CRISPR-Cas system to prevent transformation of targeted plasmids. Lastly, in absence of Cas3, Cascade was successfully used to inhibit transcription of specific genes by preventing RNA polymerase access to the promoter. The CRISPR-Cas field has seen rapid development since the first demonstration of immunity almost ten years ago. However, much research remains to fully understand these interesting adaptive immune systems and the research presented here increases our understanding of the type I-E CRISPR-Cas system.", "title": "" }, { "docid": "a51a3e1ae86e4d178efd610d15415feb", "text": "The availability of semantically annotated image and video assets constitutes a critical prerequisite for the realisation of intelligent knowledge management services pertaining to realistic user needs. Given the extend of the challenges involved in the automatic extraction of such descriptions, manually created metadata play a significant role, further strengthened by their deployment in training and evaluation tasks related to the automatic extraction of content descriptions. The different views taken by the two main approaches towards semantic content description, namely the Semantic Web and MPEG-7, as well as the traits particular to multimedia content due to the multiplicity of information levels involved, have resulted in a variety of image and video annotation tools, adopting varying description aspects. Aiming to provide a common framework of reference and furthermore to highlight open issues, especially with respect to the coverage and the interoperability of the produced metadata, in this chapter we present an overview of the state of the art in image and video annotation tools.", "title": "" }, { "docid": "0ea92e1f3071ae469cc97e430e4591bb", "text": "Organizations be it private or public often collect personal information about an individual who are their customers or clients. The personal information of an individual is private and sensitive which has to be secured from data mining algorithm which an adversary may apply to get access to the private information. In this paper we have consider the problem of securing these private and sensitive information when used in random forest classifier in the framework of differential privacy. We have incorporated the concept of differential privacy to the classical random forest algorithm. Experimental results shows that quality functions such as information gain, max operator and gini index gives almost equal accuracy regardless of their sensitivity towards the noise. Also the accuracy of the classical random forest and the differential private random forest is almost equal for different size of datasets. The proposed algorithm works for datasets with categorical as well as continuous attributes.", "title": "" }, { "docid": "d59a2c1673d093584c5f19212d6ba520", "text": "Introduction and Motivation Today, a majority of data is fundamentally distributed in nature. 
Data for almost any task is collected over a broad area, and streams in at a much greater rate than ever before. In particular, advances in sensor technology and miniaturization have led to the concept of the sensor network: a (typically wireless) collection of sensing devices collecting detailed data about their surroundings. A fundamental question arises: how to query and monitor this rich new source of data? Similar scenarios emerge within the context of monitoring more traditional, wired networks, and in other emerging models such as P2P networks and grid-based computing. The prevailing paradigm in database systems has been understanding management of centralized data: how to organize, index, access, and query data that is held centrally on a single machine or a small number of closely linked machines. In these distributed scenarios, the axiom is overturned: now, data typically streams into remote sites at high rates. Here, it is not feasible to collect the data in one place: the volume of data collection is too high, and the capacity for data communication relatively low. For example, in battery-powered wireless sensor networks, the main drain on battery life is communication, which is orders of magnitude more expensive than computation or sensing. This establishes a fundamental concept for distributed stream monitoring: if we can perform more computational work within the network to reduce the communication needed, then we can significantly improve the value of our network, by increasing its useful life and extending the range of computation possible over the network. We consider two broad classes of approaches to such in-network query processing, by analogy to query types in traditional DBMSs. In the one shot model, a query is issued by a user at some site, and must be answered based on the current state of data in the network. We identify several possible approaches to this problem. For simple queries, partial computation of the result over a tree can reduce the data transferred significantly. For “holistic” queries, such as medians, count distinct and so on, clever composable summaries give a compact way to accurately approximate query answers. Lastly, careful modeling of correlations between measurements and other trends in the data can further reduce the number of sensors probed. In the continuous model, a query is placed by a user which re-", "title": "" }, { "docid": "13a4dccde0ae401fc39b50469a0646b6", "text": "The stability theorem for persistent homology is a central result in topological data analysis. While the original formulation of the result concerns the persistence barcodes of R-valued functions, the result was later cast in a more general algebraic form, in the language of persistence modules and interleavings. In this paper, we establish an analogue of this algebraic stability theorem for zigzag persistence modules. To do so, we functorially extend each zigzag persistence module to a two-dimensional persistence module, and establish an algebraic stability theorem for these extensions. One part of our argument yields a stability result for free two-dimensional persistence modules. As an application of our main theorem, we strengthen a result of Bauer et al. on the stability of the persistent homology of Reeb graphs. 
Our main result also yields an alternative proof of the stability theorem for level set persistent homology of Carlsson et al.", "title": "" }, { "docid": "e0d63b34b6bdd5870cdd42eaa66c6c0f", "text": "This paper highlights the challenges faced due to non-availability of trusted specialized forensic tools for conducting investigation on gaming consoles. We have developed a framework to examine existing state-of-the-art forensic acquisition and analysis tools by exploring their applicability to eighth generation gaming consoles such as the Xbox One and PlayStation 4. The framework is used to validate the acquired images, compare the retrieved artifacts before and after restoring the console to the factory settings, and to conduct network forensics on both devices. The paper reveals the need of specialized forensic tools for forensic analysis of these devices.", "title": "" }, { "docid": "c33452f24bbcfce6d120d3de60813754", "text": "In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input and output [1]. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [2] classification, COCO object detection [3], VOC image segmentation [4]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters.", "title": "" }, { "docid": "9f1acbd886cdf792fcaeafad9bfdfed3", "text": "In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and are in need of their technical support. In this process, the victims are asked to provide scammers with remote access to their machines, who will then “diagnose the problem”, before offering their support services which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web. In this paper, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit for exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. 
By allowing our system to run for more than 8 months we collect a large corpus of technical support scams and use it to provide insights on their prevalence, the abused infrastructure, the illicit profits, and the current evasion attempts of scammers. Finally, by setting up a controlled, IRB-approved, experiment where we interact with 60 different scammers, we experience first-hand their social engineering tactics, while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcing agencies and propose technical and educational countermeasures for helping users avoid being victimized by technical support scams.", "title": "" }, { "docid": "b8625942315177d2fa1c534e8be5eb9f", "text": "The pantograph-overhead contact wire system is investigated by using an infrared camera. As the pantograph has a vertical motion because of the non-uniform elasticity of the catenary, in order to detect the temperature along the strip from a sequence of infrared images, a segment-tracking algorithm, based on the Hough transformation, has been employed. An analysis of the stored images could help maintenance operations revealing, for example, overheating of the pantograph strip, bursts of arcing, or an irregular positioning of the contact line. Obtained results are relevant for monitoring the status of the quality transmission of the current and for a predictive maintenance of the pantograph and of the catenary system. Examples of analysis from experimental data are reported in the paper.", "title": "" }, { "docid": "ca1005dddee029e92bc50717513a53d0", "text": "Citation recommendation is an interesting but challenging research problem. Most existing studies assume that all papers adopt the same criterion and follow the same behavioral pattern in deciding relevance and authority of a paper. However, in reality, papers have distinct citation behavioral patterns when looking for different references, depending on paper content, authors and target venues. In this study, we investigate the problem in the context of heterogeneous bibliographic networks and propose a novel cluster-based citation recommendation framework, called ClusCite, which explores the principle that citations tend to be softly clustered into interest groups based on multiple types of relationships in the network. Therefore, we predict each query's citations based on related interest groups, each having its own model for paper authority and relevance. Specifically, we learn group memberships for objects and the significance of relevance features for each interest group, while also propagating relative authority between objects, by solving a joint optimization problem. Experiments on both DBLP and PubMed datasets demonstrate the power of the proposed approach, with 17.68% improvement in Recall@50 and 9.57% growth in MRR over the best performing baseline.", "title": "" }, { "docid": "717009da92a43c298afcb48f2ccfc879", "text": "It is known that the learning rate is the most important hyper-parameter to tune for training deep convolutional neural networks (i.e., a “guessing game”). This report describes a new method for setting the learning rate, named cyclical learning rates, that eliminates the need to experimentally find the best values and schedule for the learning rates. Instead of setting the learning rate to fixed values, this method lets the learning rate cyclically vary within reasonable boundary values. 
This report shows that training with cyclical learning rates achieves near optimal classification accuracy without tuning and often in many fewer iterations. This report also describes a simple way to estimate “reasonable bounds” by linearly increasing the learning rate in one training run of the network for only a few epochs. In addition, cyclical learning rates are demonstrated on training with the CIFAR-10 dataset and the AlexNet and GoogLeNet architectures on the ImageNet dataset. These methods are practical tools for everyone who trains convolutional neural networks.", "title": "" }, { "docid": "d558f980b85bf970a7b57c00df361591", "text": "URL shortener services today have come to play an important role in our social media landscape. They direct user attention and disseminate information in online social media such as Twitter or Facebook. Shortener services typically provide short URLs in exchange for long URLs. These short URLs can then be shared and diffused by users via online social media, e-mail or other forms of electronic communication. When another user clicks on the shortened URL, she will be redirected to the underlying long URL. Shortened URLs can serve many legitimate purposes, such as click tracking, but can also serve illicit behavior such as fraud, deceit and spam. Although usage of URL shortener services today is ubiquituous, our research community knows little about how exactly these services are used and what purposes they serve. In this paper, we study usage logs of a URL shortener service that has been operated by our group for more than a year. We expose the extent of spamming taking place in our logs, and provide first insights into the planetary-scale of this problem. Our results are relevant for researchers and engineers interested in understanding the emerging phenomenon and dangers of spamming via URL shortener services.", "title": "" }, { "docid": "734840224154ef88cdb196671fd3f3f8", "text": "Tiny face detection aims to find faces with high degrees of variability in scale, resolution and occlusion in cluttered scenes. Due to the very little information available on tiny faces, it is not sufficient to detect them merely based on the information presented inside the tiny bounding boxes or their context. In this paper, we propose to exploit the semantic similarity among all predicted targets in each image to boost current face detectors. To this end, we present a novel framework to model semantic similarity as pairwise constraints within the metric learning scheme, and then refine our predictions with the semantic similarity by utilizing the graph cut techniques. Experiments conducted on three widely-used benchmark datasets have demonstrated the improvement over the-state-of-the-arts gained by applying this idea.", "title": "" }, { "docid": "d0e584d00c82df795e1d79bd4837ceb9", "text": "Many observations suggest that typical (emotional or orthostatic) vasovagal syncope (VVS) is not a disease, but rather a manifestation of a non-pathological trait. Some authors have hypothesized this type of syncope as a “defense mechanism” for the organism and a few theories have been postulated. Under the human violent conflicts theory, the VVS evolved during the Paleolithic era only in the human lineage. In this evolutionary period, a predominant cause of death was wounding by a sharp object. This theory could explain the occurrence of emotional VVS, but not of the orthostatic one. 
The clot production theory suggests that the vasovagal reflex is a defense mechanism against hemorrhage in mammals. This theory could explain orthostatic VVS, but not emotional VVS. The brain self-preservation theory is mainly based on the observation that during tilt testing a decrease in cerebral blood flow often precedes the drop in blood pressure and heart rate. The faint causes the body to take on a gravitationally neutral position, and thereby provides a better chance of restoring brain blood supply. However, a decrease in cerebral blood flow has not been demonstrated during negative emotions, which trigger emotional VVS. Under the heart defense theory, the vasovagal reflex seems to be a protective mechanism against sympathetic overactivity and the heart is the most vulnerable organ during this condition. This appears to be the only unifying theory able to explain the occurrence of the vasovagal reflex and its associated selective advantage, during both orthostatic and emotional stress.", "title": "" }, { "docid": "e4f3a9f89235ed11c4186d9c937a9620", "text": "The human hand is an exceptionally significant part of the human body which has a very complex biological system with bones, joints, and muscles. Among all hand functions, power grasping plays a crucial role in the activities of daily living. In this research a prosthetic terminal device is designed to assist the power grasping activities of amputees subjected to wrist disarticulation. The designed terminal device contains four identical fingers made of a novel linkage mechanism, which can accomplish flexion and extension. With the intention of verifying the effectiveness of the mechanism, kinematic analysis has been carried out. Furthermore, the motion simulation has demonstrated that the mechanism is capable of generating the appropriate finger movements to accomplish cylindrical and spherical power grasps. In addition, the work envelop of the proposed prosthetic finger has been determined. The 3D printed prototype of the finger was experimentally tested. The experimental results validate the effectiveness of the proposed mechanism to gain the expected motion patterns.", "title": "" } ]
scidocsrr
61a1eb0ce584c1a469adc66700ef64a0
Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings
[ { "docid": "59c24fb5b9ac9a74b3f89f74b332a27c", "text": "This paper addresses the problem of learning to map sentences to logical form, given training data consisting of natural language sentences paired with logical representations of their meaning. Previous approaches have been designed for particular natural languages or specific meaning representations; here we present a more general method. The approach induces a probabilistic CCG grammar that represents the meaning of individual words and defines how these meanings can be combined to analyze complete sentences. We use higher-order unification to define a hypothesis space containing all grammars consistent with the training data, and develop an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a log-linear parsing model. Experiments demonstrate high accuracy on benchmark data sets in four languages with two different meaning representations.", "title": "" }, { "docid": "6b7daba104f8e691dd32cba0b4d66ecd", "text": "This paper presents the first empirical results to our knowledge on learning synchronous grammars that generate logical forms. Using statistical machine translation techniques, a semantic parser based on a synchronous context-free grammar augmented with λoperators is learned given a set of training sentences and their correct logical forms. The resulting parser is shown to be the bestperforming system so far in a database query domain.", "title": "" } ]
[ { "docid": "8994470e355b5db188090be731ee4fe9", "text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.", "title": "" }, { "docid": "dc3d182f751beffdf4d7814073f6a05c", "text": "Information communication technologies (ICTs) have significantly revolutionized travel industry in the last decade. With an increasing number of travel companies participating in the Internet market, low price has become a minimum qualification to compete in the Internet market. As a result, e-service quality is becoming even more critical for companies to retain and attract customers in the digital age. This study focuses on e-service quality dimensions in the Internet market with an empirical study on online travel service. The purpose of this study is to develop a scale to evaluate e-service quality from the perspectives of both online companies and customers, which provides fresh insight into the dimensions of e-service quality. The results in this study indicate that trust from the perspective of customer and ease of use from the perspective of online company are the most critical and important facets in customers’ perception of online travel service quality, while reliability, system availability and responsiveness have influence on customer’s perception of online travel service quality as well, but the influence is not so strong as that of trust and ease of use. Online travel service companies should pay attention to the facets of reliability, system availability and responsiveness while focusing on the facets of ease of use and trust in order to improve their online travel service quality to customers.", "title": "" }, { "docid": "ab148ea69cf884b2653823b350ed5cfc", "text": "The application of information retrieval techniques to search tasks in software engineering is made difficult by the lexical gap between search queries, usually expressed in natural language (e.g. English), and retrieved documents, usually expressed in code (e.g. programming languages). This is often the case in bug and feature location, community question answering, or more generally the communication between technical personnel and non-technical stake holders in a software project. In this paper, we propose bridging the lexical gap by projecting natural language statements and code snippets as meaning vectors in a shared representation space. In the proposed architecture, word embeddings are first trained on API documents, tutorials, and reference documents, and then aggregated in order to estimate semantic similarities between documents. Empirical evaluations show that the learned vector space embeddings lead to improvements in a previously explored bug localization task and a newly defined task of linking API documents to computer programming questions.", "title": "" }, { "docid": "1fb748012ff900e14861e2b536fbd44c", "text": "This paper describes the use of data mining techniques to solve three important issues in network intrusion detection problems. 
The first goal is finding the best dimensionality reduction algorithm which reduces the computational cost while still maintains the accuracy. We implement both feature extraction (Principal Component Analysis and Independent Component Analysis) and feature selection (Genetic Algorithm and Particle Swarm Optimization) techniques for dimensionality reduction. The second goal is finding the best algorithm for misuse detection system to detect known intrusion. We implement four basic machine learning algorithms (Naïve Bayes, Decision Tree, Nearest Neighbour and Rule Induction) and then apply ensemble algorithms such as bagging, boosting and stacking to improve the performance of these four basic algorithms. The third goal is finding the best clustering algorithms to detect network anomalies which contains unknown intrusion. We analyze and compare the performance of four unsupervised clustering algorithms (k-Means, k-Medoids, EM clustering and distance-based outlier detection) in terms of accuracy and false positives. Our experiment shows that the Nearest Neighbour (NN) classifier when implemented with Particle Swarm Optimization (PSO) as an attribute selection algorithm achieved the best performance, which is 99.71% accuracy and 0.27% false positive. The misuse detection technique achieves a very good performance with more than 99% accuracy when detecting known intrusion but it fails to accurately detect data set with a large number of unknown intrusions where the highest accuracy is only 63.97%. In contrast, the anomaly detection approach shows promising results where the distance-based outlier detection method outperforms the other three clustering algorithms with the accuracy of 80.15%, followed by EM clustering (78.06%), k-Medoids (76.71%), improved k-Means (65.40%) and k-Means (57.81%).", "title": "" }, { "docid": "5772e4bfb9ced97ff65b5fdf279751f4", "text": "Deep convolutional neural networks excel at sentiment polarity classification, but tend to require substantial amounts of training data, which moreover differs quite significantly between domains. In this work, we present an approach to feed generic cues into the training process of such networks, leading to better generalization abilities given limited training data. We propose to induce sentiment embeddings via supervision on extrinsic data, which are then fed into the model via a dedicated memorybased component. We observe significant gains in effectiveness on a range of different datasets in seven different languages.", "title": "" }, { "docid": "44bbc67f44f4f516db97b317ae16a22a", "text": "Although the number of occupational therapists working in mental health has dwindled, the number of people who need our services has not. In our tendency to cling to a medical model of service provision, we have allowed the scope and content of our services to be limited to what has been supported within this model. A social model that stresses functional adaptation within the community, exemplified in psychosocial rehabilitation, offers a promising alternative. A strongly proactive stance is needed if occupational therapists are to participate fully. 
Occupational therapy can survive without mental health specialists, but a large and deserving population could ultimately be deprived of a valuable service.", "title": "" }, { "docid": "c98d0b262c76dee61b6f9923b1a246da", "text": "A variety of methods for camera calibration, relying on different camera models, algorithms and a priori object information, have been reported and reviewed in literature. Use of simple 2D patterns of the chess-board type represents an interesting approach, for which several ‘calibration toolboxes’ are available on the Internet, requiring varying degrees of human interaction. This paper presents an automatic multi-image approach exclusively for camera calibration purposes on the assumption that the imaged pattern consists of adjacent light and dark squares of equal size. Calibration results, also based on image sets from Internet sources, are viewed as satisfactory and comparable to those from other approaches. Questions regarding the role of image configuration need further investigation.", "title": "" }, { "docid": "558082c8d15613164d586cab0ba04d9c", "text": "One of the potential benefits of distributed systems is their use in providing highly-available services that are likely to be usable when needed. Availabilay is achieved through replication. By having inore than one copy of information, a service continues to be usable even when some copies are inaccessible, for example, because of a crash of the computer where a copy was stored. This paper presents a new replication algorithm that has desirable performance properties. Our approach is based on the primary copy technique. Computations run at a primary. which notifies its backups of what it has done. If the primary crashes, the backups are reorganized, and one of the backups becomes the new primary. Our method works in a general network with both node crashes and partitions. Replication causes little delay in user computations and little information is lost in a reorganization; we use a special kind of timestamp called a viewstamp to detect lost information.", "title": "" }, { "docid": "e4db0ee5c4e2a5c87c6d93f2f7536f15", "text": "Despite the importance of sparsity in many big data applications, there are few existing methods for efficient distributed optimization of sparsely-regularized objectives. In this paper, we present a communication-efficient framework for L1-regularized optimization in distributed environments. By taking a nontraditional view of classical objectives as part of a more general primal-dual setting, we obtain a new class of methods that can be efficiently distributed and is applicable to common L1-regularized regression and classification objectives, such as Lasso, sparse logistic regression, and elastic net regression. We provide convergence guarantees for this framework and demonstrate strong empirical performance as compared to other stateof-the-art methods on several real-world distributed datasets.", "title": "" }, { "docid": "d7907565c4ea6782cdb0c7b281a9d636", "text": "Acute appendicitis (AA) is among the most common cause of acute abdominal pain. Diagnosis of AA is challenging; a variable combination of clinical signs and symptoms has been used together with laboratory findings in several scoring systems proposed for suggesting the probability of AA and the possible subsequent management pathway. The role of imaging in the diagnosis of AA is still debated, with variable use of US, CT and MRI in different settings worldwide. 
Up to date, comprehensive clinical guidelines for diagnosis and management of AA have never been issued. In July 2015, during the 3rd World Congress of the WSES, held in Jerusalem (Israel), a panel of experts including an Organizational Committee and Scientific Committee and Scientific Secretariat, participated to a Consensus Conference where eight panelists presented a number of statements developed for each of the eight main questions about diagnosis and management of AA. The statements were then voted, eventually modified and finally approved by the participants to The Consensus Conference and lately by the board of co-authors. The current paper is reporting the definitive Guidelines Statements on each of the following topics: 1) Diagnostic efficiency of clinical scoring systems, 2) Role of Imaging, 3) Non-operative treatment for uncomplicated appendicitis, 4) Timing of appendectomy and in-hospital delay, 5) Surgical treatment 6) Scoring systems for intra-operative grading of appendicitis and their clinical usefulness 7) Non-surgical treatment for complicated appendicitis: abscess or phlegmon 8) Pre-operative and post-operative antibiotics.", "title": "" }, { "docid": "6b698146f5fbd2335e3d7bdfd39e8e4f", "text": "Neural network models of early sensory processing typically reduce the dimensionality of streaming input data. Such networks learn the principal subspace, in the sense of principal component analysis, by adjusting synaptic weights according to activity-dependent learning rules. When derived from a principled cost function, these rules are nonlocal and hence biologically implausible. At the same time, biologically plausible local rules have been postulated rather than derived from a principled cost function. Here, to bridge this gap, we derive a biologically plausible network for subspace learning on streaming data by minimizing a principled cost function. In a departure from previous work, where cost was quantified by the representation, or reconstruction, error, we adopt a multidimensional scaling cost function for streaming data. The resulting algorithm relies only on biologically plausible Hebbian and anti-Hebbian local learning rules. In a stochastic setting, synaptic weights converge to a stationary state, which projects the input data onto the principal subspace. If the data are generated by a nonstationary distribution, the network can track the principal subspace. Thus, our result makes a step toward an algorithmic theory of neural computation.", "title": "" }, { "docid": "ef84f7f53b60cf38972ff1eb04d0f6a5", "text": "OBJECTIVE\nThe purpose of this prospective study was to evaluate the efficacy and safety of screw fixation without bone fusion for unstable thoracolumbar and lumbar burst fracture.\n\n\nMETHODS\nNine patients younger than 40 years underwent screw fixation without bone fusion, following postural reduction using a soft roll at the involved vertebra, in cases of burst fracture. Their motor power was intact in spite of severe canal compromise. The surgical procedure included postural reduction for 3 days and screw fixations at one level above, one level below and at the fractured level itself. The patients underwent removal of implants 12 months after the initial operation, due to possibility of implant failure. 
Imaging and clinical findings, including canal encroachment, vertebral height, clinical outcome, and complications were analyzed.\n\n\nRESULTS\nPrior to surgery, the mean pain score (visual analogue scale) was 8.2, which decreased to 2.2 at 12 months after screw fixation. None of the patients complained of worsening of pain during 6 months after implant removal. All patients were graded as having excellent or good outcomes at 6 months after implant removal. The proportion of canal compromise at the fractured level improved from 55% to 35% at 12 months after surgery. The mean preoperative vertebral height loss was 45.3%, which improved to 20.6% at 6 months after implant removal. There were no neurological deficits related to neural injury. The improved vertebral height and canal compromise were maintained at 6 months after implant removal.\n\n\nCONCLUSION\nShort segment pedicle screw fixation, including fractured level itself, without bone fusion following postural reduction can be an effective and safe operative technique in the management of selected young patients suffering from unstable burst fracture.", "title": "" }, { "docid": "4419d61684dff89f4678afe3b8dc06e0", "text": "Reason and emotion have long been considered opposing forces. However, recent psychological and neuroscientific research has revealed that emotion and cognition are closely intertwined. Cognitive processing is needed to elicit emotional responses. At the same time, emotional responses modulate and guide cognition to enable adaptive responses to the environment. Emotion determines how we perceive our world, organise our memory, and make important decisions. In this review, we provide an overview of current theorising and research in the Affective Sciences. We describe how psychological theories of emotion conceptualise the interactions of cognitive and emotional processes. We then review recent research investigating how emotion impacts our perception, attention, memory, and decision-making. Drawing on studies with both healthy participants and clinical populations, we illustrate the mechanisms and neural substrates underlying the interactions of cognition and emotion.", "title": "" }, { "docid": "679eb46c45998897b4f8e641530f44a7", "text": "Workers in hazardous environments such as mining are constantly exposed to the health and safety hazards of dynamic and unpredictable conditions. One approach to enable them to manage these hazards is to provide them with situational awareness: real-time data (environmental, physiological, and physical location data) obtained from wireless, wearable, smart sensor technologies deployed at the work area. The scope of this approach is limited to managing the hazards of the immediate work area for prevention purposes; it does not include technologies needed after a disaster. Three critical technologies emerge and converge to support this technical approach: smart-wearable sensors, wireless sensor networks, and low-power embedded computing. The major focus of this report is on smart sensors and wireless sensor networks. Wireless networks form the infrastructure to support the realization of situational awareness; therefore, there is a significant focus on wireless networks. Lastly, the “Future Research” section pulls together the three critical technologies by proposing applications that are relevant to mining. 
The applications are injured miner (person-down) detection; a wireless, wearable remote viewer; and an ultrawide band smart environment that enables localization and tracking of humans and resources. The smart environment could provide location data, physiological data, and communications (video, photos, graphical images, audio, and text messages). Electrical engineer, Pittsburgh Research Laboratory, National Institute for Occupational Safety and Health, Pittsburgh, PA. President, The Designer-III Co., Franklin, PA. General engineer, Pittsburgh Research Laboratory (now with the National Personal Protective Technology Laboratory), National Institute for Occupational Safety and Health, Pittsburgh, PA. Supervisory general engineer, Pittsburgh Research Laboratory, National Institute for Occupational Safety and Health, Pittsburgh, PA.", "title": "" }, { "docid": "0c31ad159095de6057d43534199e1e45", "text": "We present a novel spatial hashing based data structure to facilitate 3D shape analysis using convolutional neural networks (CNNs). Our method builds hierarchical hash tables for an input model under different resolutions that leverage the sparse occupancy of 3D shape boundary. Based on this data structure, we design two efficient GPU algorithms namely hash2col and col2hash so that the CNN operations like convolution and pooling can be efficiently parallelized. The perfect spatial hashing is employed as our spatial hashing scheme, which is not only free of hash collision but also nearly minimal so that our data structure is almost of the same size as the raw input. Compared with existing 3D CNN methods, our data structure significantly reduces the memory footprint during the CNN training. As the input geometry features are more compactly packed, CNN operations also run faster with our data structure. The experiment shows that, under the same network structure, our method yields comparable or better benchmark results compared with the state-of-the-art while it has only one-third memory consumption when under high resolutions (i.e. 256 3).", "title": "" }, { "docid": "d11a113fdb0a30e2b62466c641e49d6d", "text": "Apache Spark has emerged as the de facto framework for big data analytics with its advanced in-memory programming model and upper-level libraries for scalable machine learning, graph analysis, streaming and structured data processing. It is a general-purpose cluster computing framework with language-integrated APIs in Scala, Java, Python and R. As a rapidly evolving open source project, with an increasing number of contributors from both academia and industry, it is difficult for researchers to comprehend the full body of development and research behind Apache Spark, especially those who are beginners in this area. In this paper, we present a technical review on big data analytics using Apache Spark. This review focuses on the key components, abstractions and features of Apache Spark. More specifically, it shows what Apache Spark has for designing and implementing big data algorithms and pipelines for machine learning, graph analysis and stream processing. In addition, we highlight some research and development directions on Apache Spark for big data analytics.", "title": "" }, { "docid": "9498afdb0db4d7f82187cd4a6af5ed36", "text": "”Bitcoin is a rare case where practice seems to be ahead of theory.” Joseph Bonneau et al. [15] This tutorial aims to further close the gap between IT security research and the area of cryptographic currencies and block chains. 
We will describe and refer to Bitcoin as an example throughout the tutorial, as it is the most prominent representative of a such a system. It also is a good reference to discuss the underlying block chain mechanics which are the foundation of various altcoins (e.g. Namecoin) and other derived systems. In this tutorial, the topic of cryptographic currencies is solely addressed from a technical IT security point-of-view. Therefore we do not cover any legal, sociological, financial and economical aspects. The tutorial is designed for participants with a solid IT security background but will not assume any prior knowledge on cryptographic currencies. Thus, we will quickly advance our discussion into core aspects of this field.", "title": "" }, { "docid": "42e2a8b8c1b855fba201e3421639d80d", "text": "Fraudulent behaviors in Google’s Android app market fuel search rank abuse and malware proliferation. We present FairPlay, a novel system that uncovers both malware and search rank fraud apps, by picking out trails that fraudsters leave behind. To identify suspicious apps, FairPlay’s PCF algorithm correlates review activities and uniquely combines detected review relations with linguistic and behavioral signals gleaned from longitudinal Google Play app data. We contribute a new longitudinal app dataset to the community, which consists of over 87K apps, 2.9M reviews, and 2.4M reviewers, collected over half a year. FairPlay achieves over 95% accuracy in classifying gold standard datasets of malware, fraudulent and legitimate apps. We show that 75% of the identified malware apps engage in search rank fraud. FairPlay discovers hundreds of fraudulent apps that currently evade Google Bouncer’s detection technology, and reveals a new type of attack campaign, where users are harassed into writing positive reviews, and install and review other apps.", "title": "" }, { "docid": "b5c64ddf3be731a281072a21700a85ee", "text": "This paper addresses the problem of joint detection and recounting of abnormal events in videos. Recounting of abnormal events, i.e., explaining why they are judged to be abnormal, is an unexplored but critical task in video surveillance, because it helps human observers quickly judge if they are false alarms or not. To describe the events in the human-understandable form for event recounting, learning generic knowledge about visual concepts (e.g., object and action) is crucial. Although convolutional neural networks (CNNs) have achieved promising results in learning such concepts, it remains an open question as to how to effectively use CNNs for abnormal event detection, mainly due to the environment-dependent nature of the anomaly detection. In this paper, we tackle this problem by integrating a generic CNN model and environment-dependent anomaly detectors. Our approach first learns CNN with multiple visual tasks to exploit semantic information that is useful for detecting and recounting abnormal events. By appropriately plugging the model into anomaly detectors, we can detect and recount abnormal events while taking advantage of the discriminative power of CNNs. Our approach outperforms the state-of-the-art on Avenue and UCSD Ped2 benchmarks for abnormal event detection and also produces promising results of abnormal event recounting.", "title": "" }, { "docid": "fdd59ff419b9613a1370babe64ef1c98", "text": "The disentangling problem is to discover multiple complex factors of variations hidden in data. 
One recent approach is to take a dataset with grouping structure and separately estimate a factor common within a group (content) and a factor specific to each group member (transformation). Notably, this approach can learn to represent a continuous space of contents, which allows for generalization to data with unseen contents. In this study, we aim at cultivating this approach within probabilistic deep generative models. Motivated by technical complication in existing groupbased methods, we propose a simpler probabilistic method, called group-contrastive variational autoencoders. Despite its simplicity, our approach achieves reasonable disentanglement with generalizability for three grouped datasets of 3D object images. In comparison with a previous model, although conventional qualitative evaluation shows little difference, our qualitative evaluation using few-shot classification exhibits superior performances for some datasets. We analyze the content representations from different methods and discuss their transformation-dependency and potential performance impacts.", "title": "" } ]
scidocsrr
8056113af74e00221e93b42807d39293
Pretarsal roll augmentation with dermal hyaluronic acid filler injection
[ { "docid": "bfd7c204dec258679e15ce477df04cad", "text": "Clarification is needed regarding the definitions and classification of groove and hollowness of the infraorbital region depending on the cause, anatomical characteristics, and appearance. Grooves in the infraorbital region can be classified as nasojugal grooves (or folds), tear trough deformities, and palpebromalar grooves; these can be differentiated based on anatomical characteristics. They are caused by the herniation of intraorbital fat, atrophy of the skin and subcutaneous fat, contraction of the orbital part of the orbicularis oculi muscle or squinting, and malar bone resorption. Safe and successful treatment requires an optimal choice of filler and treatment method. The choice between a cannula and needle depends on various factors; a needle is better for injections into a subdermal area in a relatively safe plane, while a cannula is recommended for avoiding vascular compromise when injecting filler into a deep fat layer and releasing fibrotic ligamentous structures. The injection of a soft-tissue filler into the subcutaneous fat tissue is recommended for treating mild indentations around the orbital rim and nasojugal region. Reducing the tethering effect of ligamentous structures by undermining using a cannula prior to the filler injection is recommended for treating relatively deep and fine indentations. The treatment of mild prolapse of the intraorbital septal fat or broad flattening of the infraorbital region can be improved by restoring the volume deficiency using a relatively firm filler.", "title": "" }, { "docid": "e9daa1cecacb0bbd69a7dc074bb7764d", "text": "The gross anatomy of the lower eyelid is analogous to that of the upper eyelid, however, the lower eyelid has a more simplified structure with less dynamic movement. Common malpositions of the lower eyelid include entropion and ectropion, rehabilitative surgery of which requires a thorough understanding of lower eyelid anatomy. Furthermore, precise anatomic knowledge is a prerequisite for both reconstructive and cosmetic lower eyelid surgery in order for it to be performed appropriately. In this review, we present the clinical anatomy of the structures of the lower eyelid, as well as highlighting relevant surgical implications. Featured here are the structure of the different eyelid lamellae, the lower eyelid retractors and their relations, the orbital septum, fat pad compartments, and Lockwood ligament.", "title": "" } ]
[ { "docid": "bef86730221684b8e9236cb44179b502", "text": "secure software. In order to find the real-life issues, this case study was initiated to investigate whether the existing FDD can withstand requirements change and software security altogether. The case study was performed in controlled environment – in a course called Application Development—a four credit hours course at UTM. The course began by splitting up the class to seven software development groups and two groups were chosen to implement the existing process of FDD. After students were given an introduction to FDD, they started to adapt the processes to their proposed system. Then students were introduced to the basic concepts on how to make software systems secure. Though, they were still new to security and FDD, however, this study produced a lot of interest among the students. The students seemed to enjoy the challenge of creating secure system using FDD model.", "title": "" }, { "docid": "3d310295592775bbe785692d23649c56", "text": "BACKGROUND\nEvidence indicates that sexual assertiveness is one of the important factors affecting sexual satisfaction. According to some studies, traditional gender norms conflict with women's capability in expressing sexual desires. This study examined the relationship between gender roles and sexual assertiveness in married women in Mashhad, Iran.\n\n\nMETHODS\nThis cross-sectional study was conducted on 120 women who referred to Mashhad health centers through convenient sampling in 2014-15. Data were collected using Bem Sex Role Inventory (BSRI) and Hulbert index of sexual assertiveness. Data were analyzed using SPSS 16 by Pearson and Spearman's correlation tests and linear Regression Analysis.\n\n\nRESULTS\nThe mean scores of sexual assertiveness was 54.93±13.20. According to the findings, there was non-significant correlation between Femininity and masculinity score with sexual assertiveness (P=0.069 and P=0.080 respectively). Linear regression analysis indicated that among the predictor variables, only Sexual function satisfaction was identified as the sexual assertiveness summary predictor variables (P=0.001).\n\n\nCONCLUSION\nBased on the results, sexual assertiveness in married women does not comply with gender role, but it is related to Sexual function satisfaction. So, counseling psychologists need to consider this variable when designing intervention programs for modifying sexual assertiveness and find other variables that affect sexual assertiveness.", "title": "" }, { "docid": "bed9b5a75f79d921444feba4400c9846", "text": "Clustering algorithms have successfully been applied as a digital image segmentation technique in various fields and applications. However, those clustering algorithms are only applicable for specific images such as medical images, microscopic images etc. In this paper, we present a new clustering algorithm called Image segmentation using K-mean clustering for finding tumor in medical application which could be applied on general images and/or specific images (i.e., medical and microscopic images), captured using MRI, CT scan, etc. The algorithm employs the concepts of fuzziness and belongingness to provide a better and more adaptive clustering process as compared to several conventional clustering algorithms.", "title": "" }, { "docid": "57e7635cb3bda615a1566a883d781149", "text": "The aim of this work is to propose a fusion procedure based on lidar and camera to solve the pedestrian detection problem in autonomous driving. 
Current pedestrian detection algorithms have focused on improving the discriminability of 2D features that capture the pedestrian appearance, and on using various classifier architectures. However, less focus on exploiting the 3D structure of object has limited the pedestrian detection performance and practicality. To tackle these issues, a lidar subsystem is applied here in order to extract object structure features and train a SVM classifier, reducing the number of candidate windows that are tested by a state-of-the-art pedestrian appearance classifier. Additionally, we propose a probabilistic framework to fuse pedestrian detection given by both subsystems. With the proposed framework, we have achieved state-of-the-art performance at 20 fps on our own pedestrian dataset gathered in a challenging urban scenario.", "title": "" }, { "docid": "78a104485843c3940a364719e7a22d18", "text": "We present a simple and generic way to reason about name binding. Name binding is an essential component of every nontrivial programming language, matching uses of names, references, with the things that they name, declarations, based on scoping rules defined by the language. The definition of name binding is often entangled with the language-specific details, which makes abstract and comparative analysis of competing designs challenging. We present a framework that allows to abstract the fundamental notions of references, declarations, and scopes, and to express scoping rules in terms of four scope combinators and three properties of a specific programming language encapsulated in a concept named Language. Using this framework, we clarify complex scoping rules like argument-dependent lookup in C++, investigate the implications of the concepts feature for C++, and introduce a novel scoping rule named weak hiding. In an ideal world, specifications could be formulated based on our framework, and compilers could use such formulation to unambiguously implement name binding. While our examples are primarily centered around C++ and lexical scoping, our framework has applications in other languages and dynamic scoping.", "title": "" }, { "docid": "9ece8dd1905fe0cba49d0fa8c1b21c62", "text": "This paper describes the origins and history of multiple resource theory in accounting for di€ erences in dual task interference. One particular application of the theory, the 4-dimensional multiple resources model, is described in detail, positing that there will be greater interference between two tasks to the extent that they share stages (perceptual/cognitive vs response) sensory modalities (auditory vs visual), codes (visual vs spatial) and channels of visual information (focal vs ambient). A computational rendering of this model is then presented. Examples are given of how the model predicts interference di€ erences in operational environments. Finally, three challenges to the model are outlined regarding task demand coding, task allocation and visual resource competition.", "title": "" }, { "docid": "3f51e8669da7e10204f8c952f5a2bb67", "text": "In the past few years, bully and aggressive posts on social media have grown significantly, causing serious consequences for victims/users of all demographics. Majority of the work in this field has been done for English only. In this paper, we introduce a deep learning based classification system for Facebook posts and comments of Hindi-English Code-Mixed text to detect the aggressive behaviour of/towards users. Our work focuses on text from users majorly in the Indian Subcontinent. 
The dataset that we used for our models is provided by TRAC-11 in their shared task. Our classification model assigns each Facebook post/comment to one of the three predefined categories: “Overtly Aggressive”, “Covertly Aggressive” and “Non-Aggressive”. We experimented with 6 classification models and our CNN model on a 10 K-fold crossvalidation gave the best result with the prediction accuracy of 73.2%.", "title": "" }, { "docid": "744162ac558f212f73327ba435c1d578", "text": "Massive classification, a classification task defined over a vast number of classes (hundreds of thousands or even millions), has become an essential part of many real-world systems, such as face recognition. Existing methods, including the deep networks that achieved remarkable success in recent years, were mostly devised for problems with a moderate number of classes. They would meet with substantial difficulties, e.g. excessive memory demand and computational cost, when applied to massive problems. We present a new method to tackle this problem. This method can efficiently and accurately identify a small number of “active classes” for each mini-batch, based on a set of dynamic class hierarchies constructed on the fly. We also develop an adaptive allocation scheme thereon, which leads to a better tradeoff between performance and cost. On several large-scale benchmarks, our method significantly reduces the training cost and memory demand, while maintaining competitive performance.", "title": "" }, { "docid": "0d3f9a639b44cf07ce64b689a4dd0a3f", "text": "This article analyses long-term innovation policies and development trajectories of four renewable energy technologies: wind energy, biomass, fuel cells and hydrogen, and photovoltaics. These trajectories and policies are characterised by many costly failures, setbacks, hypedisappointment cycles, tensions, and struggles. Although setbacks and non-linearities are a normal part of innovation journeys, a comparative analysis of four cases shows the recurrence of particular problems. Using Strategic Niche Management as analytical approach, we conclude that major problems exist with regard to learning processes (too much technology-push, focused on R&D), social networks (supply side oriented, narrow, closed) and expectations (hype-disappointment cycles, limited competence to assess promises).", "title": "" }, { "docid": "f7c508743dd08264f86c2eb159f735c2", "text": "Non-negative matrix factorization (NMF) approximates a non-negative matrix X by a product of two non-negative low-rank factor matrices W and H . NMF and its extensions minimize either the Kullback-Leibler divergence or the Euclidean distance between X and WH to model the Poisson noise or the Gaussian noise. In practice, when the noise distribution is heavy tailed, they cannot perform well. This paper presents Manhattan NMF (MahNMF) which minimizes the Manhattan distance between X and WH for modeling the heavy tailed Laplacian noise. Similar to sparse and low-rank matrix decompositions, e.g. robust principal component analysis (RPCA) and GoDec, MahNMF robustly estimates the low-rank part and the sparse part of a non-negative matrix and thus performs effectively when data are contaminated by outliers. We extend MahNMF for various practical applications by developing box-constrained MahNMF, manifold regularized MahNMF, group sparse MahNMF, elastic net inducing MahNMF, and symmetric MahNMF. 
The major contribution of this paper lies in two fast optimization algorithms for MahNMF and its extensions: the rank-one residual iteration (RRI) method and Nesterov’s smoothing method. In particular, by approximating the residual matrix by the outer product of one row of W and one row of H in MahNMF, we develop an RRI method to iteratively update each variable of W and H in a closed-form solution. Although RRI is efficient for small-scale MahNMF and some of its extensions, it is neither scalable to large-scale matrices nor flexible enough to optimize all MahNMF extensions. Since the objective functions of MahNMF and its extensions are neither convex nor smooth, we apply Nesterov’s smoothing method to recursively optimize one factor matrix with another matrix fixed. By setting the smoothing parameter inversely proportional to the iteration number, we improve the approximation accuracy iteratively for both MahNMF and its extensions. We conduct experiments on both synthetic and real-world datasets, such as face images, natural scene images, surveillance videos and multi-model datasets, to show the efficiency of the proposed Nesterov’s smoothing method-based algorithm for solving MahNMF and its variants, and the effectiveness of MahNMF and its variants, by comparing them with traditional NMF, RPCA, and GoDec.", "title": "" }, { "docid": "2dc261ab24914dd3f865b8ede5b71be9", "text": "Twitter has become as much a news medium as a social network, and much research has turned to analyzing its content for tracking real-world events, from politics to sports and natural disasters.
This paper describes the techniques we employed for the SNOW Data Challenge 2014, described in [16]. We show that aggressive filtering of tweets based on length and structure, combined with hierarchical clustering of tweets and ranking of the resulting clusters, achieves encouraging results. We present empirical results and discussion for two different Twitter streams focusing on the US presidential elections in 2012 and the recent events about Ukraine, Syria and the Bitcoin, in February 2014.", "title": "" }, { "docid": "0f80933b5302bd6d9595234ff8368ac4", "text": "We show how a simple convolutional neural network (CNN) can be trained to accurately and robustly regress 6 degrees of freedom (6DoF) 3D head pose, directly from image intensities. We further explain how this FacePoseNet (FPN) can be used to align faces in 2D and 3D as an alternative to explicit facial landmark detection for these tasks. We claim that in many cases the standard means of measuring landmark detector accuracy can be misleading when comparing different face alignments. Instead, we compare our FPN with existing methods by evaluating how they affect face recognition accuracy on the IJB-A and IJB-B benchmarks: using the same recognition pipeline, but varying the face alignment method. Our results show that (a) better landmark detection accuracy measured on the 300W benchmark does not necessarily imply better face recognition accuracy. (b) Our FPN provides superior 2D and 3D face alignment on both benchmarks. Finally, (c), FPN aligns faces at a small fraction of the computational cost of comparably accurate landmark detectors. For many purposes, FPN is thus a far faster and far more accurate face alignment method than using facial landmark detectors.", "title": "" }, { "docid": "c49ae120bca82ef0d9e94115ad7107f2", "text": "An evaluation and comparison of seven of the world’s major building codes and standards is conducted in this study, with specific discussion of their estimations of the alongwind, acrosswind, and torsional response, where applicable, for a given building. The codes and standards highlighted by this study are those of the United States, Japan, Australia, the United Kingdom, Canada, China, and Europe. In addition, the response predicted by using the measured power spectra of the alongwind, acrosswind, and torsional responses for several building shapes tested in a wind tunnel are presented, and a comparison between the response predicted by wind tunnel data and that estimated by some of the standards is conducted. This study serves not only as a comparison of the response estimates by international codes and standards, but also introduces a new set of wind tunnel data for validation of wind tunnel-based empirical expressions. 1.0 Introduction Under the influence of dynamic wind loads, typical high-rise buildings oscillate in the alongwind, acrosswind, and torsional directions. The alongwind motion primarily results from pressure fluctuations on the windward and leeward faces, which generally follows the fluctuations in the approach flow, at least in the low frequency range. Therefore, alongwind aerodynamic loads may be quantified analytically utilizing quasi-steady and strip theories, with dynamic effects customarily represented by a random-vibrationbased “Gust Factor Approach” (Davenport 1967, Vellozzi & Cohen 1968, Vickery 1970, Simiu 1976, Solari 1982, ESDU 1989, Gurley & Kareem 1993). 
However, the acrosswind motion is introduced by pressure fluctuations on the side faces, which are influenced by fluctuations in the separated shear layers and wake dynamics (Kareem 1982). This renders the applicability of strip and quasi-steady theories rather doubtful. Similarly, the wind-induced torsional effects result from an imbalance in the instantaneous pressure distribution on the building surface. These load effects are further amplified in asymmetric buildings as a result of inertial coupling (Kareem 1985). Due to the complexity of the acrosswind and torsional responses, physical modeling of fluid-structure interactions remains the only viable means of obtaining information on wind loads, though research in the area of computational fluid dynamics continues to develop.", "title": "" }, { "docid": "8bd619e8d1816dd5c692317a8fb8e0ed", "text": "The data mining field in computer science specializes in extracting implicit information that is distributed across the stored data records and/or exists as associations among groups of records. Criminal databases contain information on the crimes themselves, the offenders, the victims as well as the vehicles that were involved in the crime. Among these records lie groups of crimes that can be attributed to serial criminals who are responsible for multiple criminal offenses and usually exhibit patterns in their operations, by specializing in a particular crime category (i.e., rape, murder, robbery, etc.), and applying a specific method for implementing their crimes. Discovering serial criminal patterns in crime databases is, in general, a clustering activity in the area of data mining that is concerned with detecting trends in the data by classifying and grouping similar records. In this paper, we report on the different statistical and neural network approaches to the clustering problem in data mining in general, and as it applies to our crime domain in particular. We discuss our approach of using a cascaded network of Kohonen neural networks followed by heuristic processing of the networks' outputs that best simulated the experts in the field. We address the issues in this project and the reasoning behind this approach, including: the choice of neural networks, in general, over statistical algorithms as the main tool, and the use of Kohonen networks in particular, the choice for the cascaded approach instead of the direct approach, and the choice of a heuristics subsystem as a back-end subsystem to the neural networks. We also report on the advantages of this approach over both the traditional approach of using a single neural network to accommodate all the attributes, and that of applying a single clustering algorithm on all the data attributes.", "title": "" }, { "docid": "81a44de6f529f09e78ade5384c9b1527", "text": "Code Blue is an emergency code used in hospitals to indicate when a patient goes into cardiac arrest and needs resuscitation. When Code Blue is called, an on-call medical team staffed by physicians and nurses is paged and rushes in to try to save the patient's life.
It is an intense, chaotic, and resource-intensive process, and despite the considerable effort, survival rates are still less than 20% [4]. Research indicates that patients actually start showing clinical signs of deterioration some time before going into cardiac arrest [1][2][3], making early prediction, and possibly intervention, feasible. In this paper, we describe our work, in partnership with NorthShore University HealthSystem, that preemptively flags patients who are likely to go into cardiac arrest, using signals extracted from demographic information, hospitalization history, vitals and laboratory measurements in patient-level electronic medical records. We find that early prediction of Code Blue is possible and, when compared with the state-of-the-art method used by hospitals (MEWS - Modified Early Warning Score)[4], our methods perform significantly better. Based on these results, this system is now being considered for deployment in hospital settings.", "title": "" }, { "docid": "5bc7e46eedc9b525d36c72169eea8a3e", "text": "Training object class detectors typically requires a large set of images in which objects are annotated by bounding-boxes. However, manually drawing bounding-boxes is very time-consuming. We propose a new scheme for training object detectors which only requires annotators to verify bounding-boxes produced automatically by the learning algorithm. Our scheme iterates between re-training the detector, re-localizing objects in the training images, and human verification. We use the verification signal both to improve re-training and to reduce the search space for re-localisation, which makes these steps different to what is normally done in a weakly supervised setting. Extensive experiments on PASCAL VOC 2007 show that (1) using human verification to update detectors and reduce the search space leads to the rapid production of high-quality bounding-box annotations, (2) our scheme delivers detectors performing almost as well as those trained in a fully supervised setting, without ever drawing any bounding-box, (3) as the verification task is very quick, our scheme substantially reduces total annotation time by a factor 6×-9×.", "title": "" }, { "docid": "45b70b0b163faae47cfaaba2d2feefd1", "text": "Energy saving and prolonging mileage are very important for battery-operated electric vehicles (BEVs). For saving energy in BEVs, regenerative braking performance is a key factor. Permanent magnet DC (PMDC) motor based regenerative braking can be a solution to improve energy saving efficiency in BEVs. In this paper, a novel regenerative braking mechanism based on a PMDC motor is proposed. With the proposed method, braking can be achieved by applying different armature voltages from a battery bank without using an additional converter with a complex switching technique, an ultracapacitor, or a complex winding changeover. An experimental setup has been used to evaluate the performance of the proposed braking system. Simulation results show that the proposed regenerative braking technique is feasible and effective. This research also provides a simple system for regenerative braking using a PMDC motor to improve the mileage of electric vehicles.", "title": "" }, { "docid": "d00957d93af7b2551073ba84b6c0f2a6", "text": "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource-constrained devices.
In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN’s evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice those of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still slightly higher than that of the original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by ∼ 1%. Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn", "title": "" } ]
scidocsrr
1201828e9489efc730dd6894a3437c29
Incentive Compatibility of Bitcoin Mining Pool Reward Functions
[ { "docid": "7ab8ccfbc6cff2804cf003c2e684c8f5", "text": "In this paper we describe the various scoring systems used to calculate rewards of participants in Bitcoin pooled mining, explain the problems each were designed to solve and analyze their respective advantages and disadvantages.", "title": "" } ]
[ { "docid": "9e6838b0fb9fc2d6b8ea541260a0e4cf", "text": "In order to achieve better collecting consumption, diagnostic and status of water, natural gas and electricity metering, an electronic device known as smart meter is introduced. These devices are increasingly installed around the globe and together with Automatic Meter Reading (AMR) technology form the basis of future intelligent metering. Devices known as concentrators collect consumption records from smart meters and send them for further processing and analysis. This paper describes the implementation and analysis of one universal electronic device that can be used as concentrator, gateway or both. Implemented device has been tested in real conditions with a smart gas meters. Meter-Bus (M-Bus) standards were discussed and how they define the structure of modern gas metering system. Special analysis is carried out about the range of communication and the impact of the place of installation of the concentrator and smart meters.", "title": "" }, { "docid": "36f960b37e7478d8ce9d41d61195f83a", "text": "An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives au explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, sphericat-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than sphericalinterpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamformmg and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method.", "title": "" }, { "docid": "a854ee8cf82c4bd107e93ed0e70ee543", "text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.", "title": "" }, { "docid": "1c5ab22135bb293919022585bae160ef", "text": "Job satisfaction and employee performance has been a topic of research for decades. Whether job satisfaction influences employee satisfaction in organizations remains a crucial issue to managers and psychologists. That is where the problem lies. 
Therefore, the objective of this paper is to trace the relationship between job satisfaction and employee performance in organizations with particular reference to Nigeria. Related literature on the some theories of job satisfaction such as affective events, two-factor, equity and job characteristics was reviewed and findings from these theories indicate that a number of factors like achievement, recognition, responsibility, pay, work conditions and so on, have positive influence on employee performance in organizations. The paper adds to the theoretical debate on whether job satisfaction impacts positively on employee performance. It concludes that though the concept of job satisfaction is complex, using appropriate variables and mechanisms can go a long way in enhancing employee performance. It recommends that managers should use those factors that impact employee performance to make them happy, better their well being and the environment. It further specifies appropriate mechanisms using a theoretical approach to support empirical approaches which often lack clarity as to why the variables are related.", "title": "" }, { "docid": "97501db2db0fb83fef5cf4e30d1728d8", "text": "Autonomous automated vehicles are the next evolution in transportation and will improve safety, traffic efficiency and driving experience. Automated vehicles are equipped with multiple sensors (LiDAR, radar, camera, etc.) enabling local awareness of their surroundings. A fully automated vehicle will unconditionally rely on its sensors readings to make short-term (i.e. safety-related) and long-term (i.e. planning) driving decisions. In this context, sensors have to be robust against intentional or unintentional attacks that aim at lowering sensor data quality to disrupt the automation system. This paper presents remote attacks on camera-based system and LiDAR using commodity hardware. Results from laboratory experiments show effective blinding, jamming, replay, relay, and spoofing attacks. We propose software and hardware countermeasures that improve sensors resilience against these attacks.", "title": "" }, { "docid": "7f6e966f3f924e18cb3be0ae618309e6", "text": "designed shapes incorporating typedesign tradition, the rules related to visual appearance, and the design ideas of a skilled character designer. The typographic design process is structured and systematic: letterforms are visually related in weight, contrast, space, alignment, and style. To create a new typeface family, type designers generally start by designing a few key characters—such as o, h, p, and v— incorporating the most important structure elements such as vertical stems, round parts, diagonal bars, arches, and serifs (see Figure 1). They can then use the design features embedded into these structure elements (stem width, behavior of curved parts, contrast between thick and thin shape parts, and so on) to design the font’s remaining characters. Today’s industrial font description standards such as Adobe Type 1 or TrueType represent typographic characters by their shape outlines, because of the simplicity of digitizing the contours of well-designed, large-size master characters. However, outline characters only implicitly incorporate the designer’s intentions. Because their structure elements aren’t explicit, creating aesthetically appealing derived designs requiring coherent changes in character width, weight (boldness), and contrast is difficult. 
Outline characters aren’t suitable for optical scaling, which requires relatively fatter letter shapes at small sizes. Existing approaches for creating derived designs from outline fonts require either specifying constraints to maintain the coherence of structure elements across different characters or creating multiple master designs for the interpolation of derived designs. We present a new approach for describing and synthesizing typographic character shapes. Instead of describing characters by their outlines, we conceive each character as an assembly of structure elements (stems, bars, serifs, round parts, and arches) implemented by one or several shape components. We define the shape components by typeface-category-dependent global parameters such as the serif and junction types, by global font-dependent metrics such as the location of reference lines and the width of stems and curved parts, and by group and local parameters. (See the sidebar “Previous Work” for background information on the field of parameterizable fonts.)", "title": "" }, { "docid": "94535b71855026738a0dad677f14e5b8", "text": "Rule extraction (RE) from recurrent neural networks (RNNs) refers to finding models of the underlying RNN, typically in the form of finite state machines, that mimic the network to a satisfactory degree while having the advantage of being more transparent. RE from RNNs can be argued to allow a deeper and more profound form of analysis of RNNs than other, more or less ad hoc methods. RE may give us understanding of RNNs in the intermediate levels between quite abstract theoretical knowledge of RNNs as a class of computing devices and quantitative performance evaluations of RNN instantiations. The development of techniques for extraction of rules from RNNs has been an active field since the early 1990s. This article reviews the progress of this development and analyzes it in detail. In order to structure the survey and evaluate the techniques, a taxonomy specifically designed for this purpose has been developed. Moreover, important open research issues are identified that, if addressed properly, possibly can give the field a significant push forward.", "title": "" }, { "docid": "a8dc95d53c04f49231c8b4dea83c55f8", "text": "One of the main drawbacks of nonoverlapped coils in fractional slot concentrated winding permanent magnet (PM) machines are the high eddy current losses in both rotor core and permanent magnets induced by the asynchronous harmonics of the armature reaction field. It has been shown in the literature that the reduction of low space harmonics can effectively reduce the rotor eddy current losses. This paper shows that employing a combined star-delta winding to a three-phase PM machine with fractional slot windings and with a number of slots equal to 12, or its multiples, yields a complete cancellation to the fundamental magneto-motive force (MMF) component, which significantly reduces the induced rotor eddy current. Besides, it offers a slight increase in machine torque density. A case study on the well-known 12-slot/10-pole PM machine is conducted to explore the proposed approach. With the same concept, the general n-phase PM machine occupying 4n slots and with a dual n-phase winding is then proposed. This configuration offers a complete cancelation of all harmonics below the torque producing MMF component. Hence, the induced eddy currents in both rotor core and magnets are significantly reduced. 
The winding connection and the required number of turns for both winding groups are also given. The concept is applied to a 20-slot/18-pole stator with a dual five-phase winding, where the stator winding is connected as a combined star/pentagon connection. The proposed concept is assessed through a simulation study based on 2-D finite element analysis.", "title": "" }, { "docid": "77bb711327befd3f4169b4548cc5a85d", "text": "We present a new technique for learning visual-semantic embeddings for cross-modal retrieval. Inspired by hard negative mining, the use of hard negatives in structured prediction, and ranking loss functions, we introduce a simple change to common loss functions used for multi-modal embeddings. That, combined with fine-tuning and use of augmented data, yields significant gains in retrieval performance. We showcase our approach, VSE++, on MS-COCO and Flickr30K datasets, using ablation studies and comparisons with existing methods. On MS-COCO our approach outperforms state-ofthe-art methods by 8.8% in caption retrieval and 11.3% in image retrieval (at R@1).", "title": "" }, { "docid": "99fa507d3b36e1a42f0dbda5420e329a", "text": "Reference Points and Effort Provision A key open question for theories of reference-dependent preferences is what determines the reference point. One candidate is expectations: what people expect could affect how they feel about what actually occurs. In a real-effort experiment, we manipulate the rational expectations of subjects and check whether this manipulation influences their effort provision. We find that effort provision is significantly different between treatments in the way predicted by models of expectation-based reference-dependent preferences: if expectations are high, subjects work longer and earn more money than if expectations are low. JEL Classification: C91, D01, D84, J22", "title": "" }, { "docid": "d026b12bedce1782a17654f19c7dcdf7", "text": "The millions of movies produced in the human history are valuable resources for computer vision research. However, learning a vision model from movie data would meet with serious difficulties. A major obstacle is the computational cost – the length of a movie is often over one hour, which is substantially longer than the short video clips that previous study mostly focuses on. In this paper, we explore an alternative approach to learning vision models from movies. Specifically, we consider a framework comprised of a visual module and a temporal analysis module. Unlike conventional learning methods, the proposed approach learns these modules from different sets of data – the former from trailers while the latter from movies. This allows distinctive visual features to be learned within a reasonable budget while still preserving long-term temporal structures across an entire movie. We construct a large-scale dataset for this study and define a series of tasks on top. Experiments on this dataset showed that the proposed method can substantially reduce the training time while obtaining highly effective features and coherent temporal structures.", "title": "" }, { "docid": "05f36ee9c051f8f9ea6e48d4fdd28dae", "text": "While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching . In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. 
We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension. Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class. A preliminary version of this paper appeared in the Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 303–314, August 1991. Most of this research was carried out while both authors were at the MIT Laboratory for Computer Science with support provided by ARO Grant DAAL03-86-K-0171, DARPA Contract N00014-89-J-1988, NSF Grant CCR-88914428, and a grant from the Siemens Corporation. S. Goldman is currently supported in part by a G.E. Foundation Junior Faculty Grant and NSF Grant CCR-9110108.", "title": "" }, { "docid": "519b0dbeb1193a14a06ba212790f49d4", "text": "In recent years, sign language recognition has attracted much attention in computer vision. A sign language is a means of communication that uses the hands, arms, body, and face to convey thoughts and meanings. Like spoken languages, sign languages emerge and evolve naturally within hearing-impaired communities. However, sign languages are not universal. There is no internationally recognized and standardized sign language for all deaf people. As is the case in spoken language, every country has its own sign language with a high degree of grammatical variation. The sign language used in India is commonly known as Indian Sign Language (henceforth called ISL).", "title": "" }, { "docid": "131517391d81c321f922e2c1507bb247", "text": "This study was undertaken to apply recurrent neural networks to the recognition of stock price patterns, and to develop a new method for evaluating the networks. In stock trading, triangle patterns provide an important clue to the trend of future change in stock prices, but the patterns are not clearly defined by rule-based approaches. From stock price data for all names of corporations listed in The First Section of Tokyo Stock Exchange, an expert chart reader extracted sixteen triangles. These patterns were divided into two groups, 15 training patterns and one test pattern. Using stock data from the past 3 years for 16 names, 16 experiments for the recognition were carried out, where the groups were cyclically used. The experiments revealed that the given test triangle was accurately recognized in 15 out of 16 experiments, and that the number of the mismatching patterns was 1.06 per name on average. A new method was developed for evaluating recurrent networks with context transition performances, in particular, temporal transition performances. The method for the triangle sequences is applicable to decreasing mismatching patterns. By applying a cluster analysis to context vectors generated in the networks at the recognition stage, a transition chart for context vector categorization was obtained for each stock price sequence. The finishing categories for the context vectors in the charts indicated that this method was effective in decreasing mismatching patterns.", "title": "" }, { "docid": "d5f2cb3839a8e129253e3433b9e9a5bc", "text": "Product classification in Commerce search (e.g., Google Product Search, Bing Shopping) involves associating categories to offers of products from a large number of merchants. The categorized offers are used in many tasks including product taxonomy browsing and matching merchant offers to products in the catalog.
Hence, learning a product classifier with high precision and recall is of fundamental importance in order to provide a high-quality shopping experience. A product offer typically consists of a short textual description and an image depicting the product. The traditional approach to this classification task is to learn a classifier using only the textual descriptions of the products. In this paper, we show that the use of images, a weaker signal in our setting, in conjunction with the textual descriptions, a more discriminative signal, can considerably improve the precision of the classification task, irrespective of the type of classifier being used. We present a novel classification approach, Cross Adapt, that is cognizant of the disparity in the discriminative power of different types of signals and hence makes use of the confusion matrix of the dominant signal (text in our setting) to prudently leverage the weaker signal (image), for improved performance. Our evaluation performed on data from a major Commerce search engine's catalog shows a 12% (absolute) improvement in precision at 100% coverage, and a 16% (absolute) improvement in recall at 90% precision compared to classifiers that only use textual descriptions of products. In addition, Cross Adapt also provides a more accurate classifier based only on the dominant signal (text) that can be used in situations in which only the dominant signal is available during application time.", "title": "" }, { "docid": "1a10e38cfc5cad20c64709c59053ffad", "text": "Corporate and product brands are increasingly accepted as valuable intangible assets of organisations, evidence of which is apparent in the reported financial value that strong brands fetch when traded in the mergers and acquisitions markets. However, while much attention is paid to conceptualising brand equity, less is paid to how brands should be managed and delivered in order to create and safeguard brand equity. In this article we develop a conceptual model of corporate brand management for creating and safeguarding brand equity. We argue that while legal protection of the brand is important, by itself it is insufficient to protect brand equity in the long term. We suggest that brand management ought to play an important role in safeguarding brand equity and propose a three-stage conceptual model for building and sustaining brand equity comprising: (1) adopting a brand-orientation mindset, (2) developing internal branding capabilities, and (3) consistent delivery of the brand. We put forward propositions, which, taken together, form a theory of brand management for building and safeguarding brand equity. We illustrate the theory using 14 cases of award-winning service companies. Their use serves as a demonstration of how our model applies to brand management.", "title": "" }, { "docid": "1ccc1b904fa58b1e31f4f3f4e2d76707", "text": "When children and adolescents are the target population in dietary surveys, many different respondent and observer considerations surface. The cognitive abilities required to self-report food intake include an adequately developed concept of time, a good memory and attention span, and a knowledge of the names of foods. From the age of 8 years there is a rapid increase in the ability of children to self-report food intake. However, while cognitive abilities should be fully developed by adolescence, issues of motivation and body image may hinder willingness to report.
Ten validation studies of energy intake data have demonstrated that mis-reporting, usually in the direction of under-reporting, is likely. Patterns of under-reporting vary with age, and are influenced by weight status and the dietary survey method used. Furthermore, evidence for the existence of subject-specific responding in dietary assessment challenges the assumption that repeated measurements of dietary intake will eventually obtain valid data. Unfortunately, the ability to detect mis-reporters, by comparison with presumed energy requirements, is limited unless detailed activity information is available to allow the energy intake of each subject to be evaluated individually. In addition, high variability in nutrient intakes implies that, if intakes are valid, prolonged dietary recording will be required to rank children correctly for distribution analysis. Future research should focus on refining dietary survey methods to make them more sensitive to different ages and cognitive abilities. The development of improved techniques for identification of mis-reporters and investigation of the issue of differential reporting of foods should also be given priority.", "title": "" }, { "docid": "3ad47c45135498f6ed94004e28028f6e", "text": "This paper describes the theory and implementation of Bayesian networks in the context of automatic speech recognition. Bayesian networks provide a succinct and expressive graphical language for factoring joint probability distributions, and we begin by presenting the structures that are appropriate for doing speech recognition training and decoding. This approach is notable because it expresses all the details of a speech recognition system in a uniform way using only the concepts of random variables and conditional probabilities. A powerful set of computational routines complements the representational utility of Bayesian networks, and the second part of this paper describes these algorithms in detail. We present a novel view of inference in general networks – where inference is done via a change-of-variables that renders the network tree-structured and amenable to a very simple form of inference. We present the technique in terms of straightforward dynamic programming recursions analogous to HMM a–b computation, and then extend it to handle deterministic constraints amongst variables in an extremely efficient manner. The paper concludes with a sequence of experimental results that show the range of effects that can be modeled, and that significant reductions in error-rate can be expected from intelligently factored state representations. 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "5bd2a871d376cf2702e38ee7777b0060", "text": "Interconnected smart vehicles offer a range of sophisticated services that benefit the vehicle owners, transport authorities, car manufacturers, and other service providers. This potentially exposes smart vehicles to a range of security and privacy threats such as location tracking or remote hijacking of the vehicle. In this article, we argue that blockchain (BC), a disruptive technology that has found many applications from cryptocurrencies to smart contracts, is a potential solution to these challenges. We propose a BC-based architecture to protect the privacy of users and to increase the security of the vehicular ecosystem. Wireless remote software updates and other emerging services such as dynamic vehicle insurance fees are used to illustrate the efficacy of the proposed security architecture. 
We also qualitatively argue the resilience of the architecture against common security attacks.", "title": "" }, { "docid": "0f10bb2afc1797fad603d8c571058ecb", "text": "This paper presents findings from the All Wales Hate Crime Project. Most hate crime research has focused on discrete victim types in isolation. For the first time, internationally, this paper examines the psychological and physical impacts of hate crime across seven victim types drawing on quantitative and qualitative data. It contributes to the hate crime debate in two significant ways: (1) it provides the first look at the problem in Wales and (2) it provides the first multi-victim-type analysis of hate crime, showing that impacts are not homogenous across victim groups. The paper provides empirical credibility to the impacts felt by hate crime victims on the margins who have routinely struggled to gain support.", "title": "" } ]
scidocsrr
db6d846dc73ff64e1e6cb98dd3d8ffc5
The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task
[ { "docid": "76d5bb6cd7e6ee374a958100adb4b1b1", "text": "Technical developments in computer hardware and software now make it possible to introduce automation into virtually all aspects of human-machine systems. Given these technical capabilities, which system functions should be automated and to what extent? We outline a model for types and levels of automation that provides a framework and an objective basis for making such choices. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision/action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design.", "title": "" } ]
[ { "docid": "721a64c9a5523ba836318edcdb8de021", "text": "Highly-produced audio stories often include musical scores that reflect the emotions of the speech. Yet, creating effective musical scores requires deep expertise in sound production and is time-consuming even for experts. We present a system and algorithm for re-sequencing music tracks to generate emotionally relevant music scores for audio stories. The user provides a speech track and music tracks and our system gathers emotion labels on the speech through hand-labeling, crowdsourcing, and automatic methods. We develop a constraint-based dynamic programming algorithm that uses these emotion labels to generate emotionally relevant musical scores. We demonstrate the effectiveness of our algorithm by generating 20 musical scores for audio stories and showing that crowd workers rank their overall quality significantly higher than stories without music.", "title": "" }, { "docid": "aa077e684f3cde9b1b4928c176c3d07b", "text": "As machine learning models continue to increase in complexity, collecting large hand-labeled training sets has become one of the biggest roadblocks in practice. Instead, weaker forms of supervision that provide noisier but cheaper labels are often used. However, these weak supervision sources have diverse and unknown accuracies, may output correlated labels, and may label different tasks or apply at different levels of granularity. We propose a framework for integrating and modeling such weak supervision sources by viewing them as labeling different related sub-tasks of a problem, which we refer to as the multi-task weak supervision setting. We show that by solving a matrix completion-style problem, we can recover the accuracies of these multi-task sources given their dependency structure, but without any labeled data, leading to higher-quality supervision for training an end model. Theoretically, we show that the generalization error of models trained with this approach improves with the number of unlabeled data points, and characterize the scaling with respect to the task and dependency structures. On three fine-grained classification problems, we show that our approach leads to average gains of 20.2 points in accuracy over a traditional supervised approach, 6.8 points over a majority vote baseline, and 4.1 points over a previously proposed weak supervision method that models tasks separately.", "title": "" }, { "docid": "44a84af55421c88347034d6dc14e4e30", "text": "Anomaly detection plays an important role in protecting computer systems from unforeseen attack by automatically recognizing and filter atypical inputs. However, it can be difficult to balance the sensitivity of a detector – an aggressive system can filter too many benign inputs while a conservative system can fail to catch anomalies. Accordingly, it is important to rigorously test anomaly detectors to evaluate potential error rates before deployment. However, principled systems for doing so have not been studied – testing is typically ad hoc, making it difficult to reproduce results or formally compare detectors. To address this issue we present a technique and implemented system, Fortuna, for obtaining probabilistic bounds on false positive rates for anomaly detectors that process Internet data. Using a probability distribution based on PageRank and an efficient algorithm to draw samples from the distribution, Fortuna computes an estimated false positive rate and a probabilistic bound on the estimate’s accuracy. 
By drawing test samples from a well defined distribution that correlates well with data seen in practice, Fortuna improves on ad hoc methods for estimating false positive rate, giving bounds that are reproducible, comparable across different anomaly detectors, and theoretically sound. Experimental evaluations of three anomaly detectors (SIFT, SOAP, and JSAND) show that Fortuna is efficient enough to use in practice — it can sample enough inputs to obtain tight false positive rate bounds in less than 10 hours for all three detectors. These results indicate that Fortuna can, in practice, help place anomaly detection on a stronger theoretical foundation and help practitioners better understand the behavior and consequences of the anomaly detectors that they deploy. As part of our work, we obtain a theoretical result that may be of independent interest: We give a simple analysis of the convergence rate of the random surfer process defining PageRank that guarantees the same rate as the standard, second-eigenvalue analysis, but does not rely on any assumptions about the link structure of the web.", "title": "" }, { "docid": "d5a4c2d61e7d65f1972ed934f399847e", "text": "We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor/action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor/action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty.", "title": "" }, { "docid": "b5f8f310f2f4ed083b20f42446d27feb", "text": "This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct as well as empirical evidence (from real world applications and simulation tests) that demonstrates that these systems work efficiently and reliably in practice.", "title": "" }, { "docid": "af9768101a634ab57eb2554953ef63ec", "text": "Very recently, there has been a perfect storm of technical advances that has culminated in the emergence of a new interaction modality: on-body interfaces. Such systems enable the wearer to use their body as an input and output platform with interactive graphics. Projects such as PALMbit and Skinput sought to answer the initial and fundamental question: whether or not on-body interfaces were technologically possible. Although considerable technical work remains, we believe it is important to begin shifting the question away from how and what, and towards where, and ultimately why. These are the class of questions that inform the design of next generation systems. 
To better understand and explore this expansive space, we employed a mixed-methods research process involving more than two thousand individuals. This started with high-resolution, but low-detail crowdsourced data. We then combined this with rich, expert interviews, exploring aspects ranging from aesthetics to kinesthetics. The results of this complementary, structured exploration point the way towards more comfortable, efficacious, and enjoyable on-body user experiences.", "title": "" }, { "docid": "aea0aeea95d251b5a7102825ad5c66ce", "text": "Lifetime extension is a major concern in real-time wireless sensor network (WSN) applications; if the life of the battery attached to a sensor node is not managed properly, the network lifetime falls short. A protocol using a new evolutionary technique, cat swarm optimization (CSO), is designed and implemented in real time to minimize the intra-cluster distances between the cluster members and their cluster heads and optimize the energy distribution for the WSNs. We analyzed the performance of the WSN protocol with the help of sensor nodes deployed in a field and grouped into clusters. The novelty in our proposed scheme is considering the received signal strength, residual battery voltage and intra-cluster distance of sensor nodes in cluster head selection with the help of CSO. The result is compared with the well-known protocol Low-energy adaptive clustering hierarchy-centralized (LEACH-C) and the swarm-based optimization technique Particle swarm optimization (PSO). It was found that the battery energy level increased considerably compared with the traditional LEACH and PSO algorithms.", "title": "" }, { "docid": "a39f11e64ba8347b212b7e34fa434f32", "text": "This paper proposes a fully distributed multiagent-based reinforcement learning method for optimal reactive power dispatch. According to the method, two agents communicate with each other only if their corresponding buses are electrically coupled. The global rewards that are required for learning are obtained with a consensus-based global information discovery algorithm, which has been demonstrated to be efficient and reliable. Based on the discovered global rewards, a distributed Q-learning algorithm is implemented to minimize the active power loss while satisfying operational constraints. The proposed method does not require an accurate system model and can learn from scratch. Simulation studies with power systems of different sizes show that the method is very computationally efficient and able to provide near-optimal solutions. It can be observed that prior knowledge can significantly speed up the learning process and decrease the occurrences of undesirable disturbances.", "title": "" }, { "docid": "048359b540d2fbd0f2c304fd33bcad8a", "text": "OBJECTIVE\nTo evaluate the association between prior invasive gynecologic procedures and the risk of subsequent abnormally invasive placenta (ie, placenta accreta, increta, and percreta).\n\n\nMETHODS\nWe conducted a population-based data linkage study including all primiparous women who delivered in New South Wales, Australia, between 2003 and 2012. Data were obtained from linked birth and hospital admissions with a minimum lookback period of 2 years.
Prior procedures invasive of the uterus were considered including gynecologic laparoscopy with instrumentation of the uterus; hysteroscopy, including operative hysteroscopy; curettage, including suction curettage and surgical termination; and endometrial ablation. Modified Poisson regression was used to determine the association between the number of prior gynecologic procedures and risk of abnormally invasive placenta.\n\n\nRESULTS\nEight hundred fifty-four cases of abnormally invasive placenta were identified among 380,775 deliveries included in the study (22.4/10,000). In total, 33,296 primiparous women had at least one prior procedure (8.7%). Among women with abnormally invasive placenta, 152 (17.8%) had undergone at least one procedure compared with 33,144 (8.7%) among women without abnormally invasive placenta (P<.01). After adjustment, the relative risk was 1.5 for one procedure (99% CI 1.1-1.9), 2.7 for two procedures (99% CI 1.7-4.4), and 5.1 for three or more procedures (99% CI 2.7-9.6). Abnormally invasive placenta was also positively associated with maternal age, socioeconomic advantage, mother being Australia-born, placenta previa, hypertension, multiple births, use of assisted reproductive technology, and female fetal sex.\n\n\nCONCLUSION\nWomen with a history of prior invasive gynecologic procedures were more likely to develop abnormally invasive placenta. These insights may be used to inform management of pregnancies in women with a history of gynecologic procedures.", "title": "" }, { "docid": "7cd87a6e9890b55cdac1c6231833d63f", "text": "Although the benefits of Object-Orientation are manifold and it is, for certain, one of the mainstays for software production in the future, it will only achieve widespread practical acceptance when the management aspects of the software development process using this technology are carefully addressed. Here, software metrics play an important role allowing, among other things, better planning, the assessment of improvements, the reduction of unpredictability, early identification of potential problems and productivity evaluation. This paper proposes a set of metrics suitable for evaluating the use of the main abstractions of the Object-Oriented paradigm such as inheritance, encapsulation, information hiding or polymorphism and the consequent emphasis on reuse that, together, are believed to be responsible for the increase in software quality and development productivity. Those metrics are aimed at helping to establish comparisons throughout the practitioners’ community and setting design recommendations that may eventually become organization standards. Some desirable properties for such a metrics set are also presented. Future lines of research are envisaged.", "title": "" }, { "docid": "79ea2c1566b3bb1e27fe715b1a1a385b", "text": "The number of research papers available is growing at a staggering rate. Researchers need tools to help them find the papers they should read among all the papers published each year. In this paper, we present and experiment with hybrid recommender algorithms that combine Collaborative Filtering and Content-based. Filtering to recommend research papers to users. Our hybrid algorithms combine the strengths of each filtering approach to address their individual weaknesses. We evaluated our algorithms through offline experiments on a database of 102, 000 research papers, and through an online experiment with 110 users. 
For both experiments we used a dataset created from the CiteSeer repository of computer science research papers. We developed separate English and Portuguese versions of the interface and specifically recruited American and Brazilian users to test for cross-cultural effects. Our results show that users value paper recommendations, that the hybrid algorithms can be successfully combined, that different algorithms are more suitable for recommending different kinds of papers, and that users with different levels of experience perceive recommendations differently These results can be applied to develop recommender systems for other types of digital libraries.", "title": "" }, { "docid": "6a53ba46206f5149d6b86057aa37d127", "text": "BACKGROUND\nThe structured system for peer assisted learning in writing named Paired Writing (Topping, 1995) incorporates both metacognitive prompting and scaffolding for the interactive process.\n\n\nAIM\nThis study sought to evaluate the relative contribution of these two components to student gain in quality of writing and attitudes to writing, while controlling for amount of writing practice and teacher effects.\n\n\nSAMPLE\nParticipants were 28 ten- and eleven-year-old students forming a problematic mixed ability class.\n\n\nMETHODS\nAll received training in Paired Writing and its inherent metacognitive prompting. Students matched by gender and pre-test writing scores were assigned randomly to Interaction or No Interaction conditions. In the Interaction condition, the more able writers became 'tutors' for the less able. In the No Interaction condition, the more able writers acted as controls for the tutors and the less able as controls for the tutees. Over six weeks, the paired writers produced five pieces of personal writing collaboratively, while children in the No Interaction condition did so alone.\n\n\nRESULTS\nOn pre- and post-project analyses of the quality of individual writing, all groups showed statistically significant improvements in writing. However, the pre-post gains of the children who wrote interactively were significantly greater than those of the lone writers. There was some evidence that the paired writers also had more positive self-esteem as writers.\n\n\nCONCLUSION\nThe operation and durability of the Paired Writing system are discussed.", "title": "" }, { "docid": "36b7b37429a8df82e611df06303a8fcb", "text": "Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs) – semantic-preserving perturbations that induce changes in the model’s predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs) – simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual questionanswering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. 
SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.", "title": "" }, { "docid": "d62129c82df200ce80be4f3865bccffc", "text": "In recent years, different web knowledge graphs, both free and commercial, have been created. Knowledge graphs use relations between entities to describe facts in the world. We engage in embedding a large scale knowledge graph into a continuous vector space. TransE, TransH, TransR and TransD are promising methods proposed in recent years and achieved state-of-the-art predictive performance. In this paper, we discuss that graph structures should be considered in embedding and propose to embed substructures called “one-relation-circle” (ORC) to further improve the performance of the above methods as they are unable to encode ORC substructures. Some complex models are capable of handling ORC structures but sacrifice efficiency in the process. To make a good trade-off between the model capacity and efficiency, we propose a method to decompose ORC substructures by using two vectors to represent the entity as a head or tail entity with the same relation. In this way, we can encode the ORC structure properly when apply it to TransH, TransR and TransD with almost the same model complexity of themselves. We conduct experiments on link prediction with benchmark dataset WordNet. Our experiments show that applying our method improves the results compared with the corresponding original results of TransH, TransR and TransD.", "title": "" }, { "docid": "caf333abcf4e22b973532bb3bc48cc90", "text": "This paper presents a multi-layer secure IoT network model based on blockchain technology. The model reduces the difficulty of actual deployment of the blockchain technology by dividing the Internet of Things into a multi-level de-centric network and adopting the technology of block chain technology at all levels of the network, with the high security and credibility assurance of the blockchain technology retaining. It provides a wide-area networking solution of Internet of Things.", "title": "" }, { "docid": "bf7679eedfe88210b70105d50ae8acf4", "text": "Figure 1: Latent space of unsupervised VGAE model trained on Cora citation network dataset [1]. Grey lines denote citation links. Colors denote document class (not provided during training). Best viewed on screen. We introduce the variational graph autoencoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) [2, 3]. This model makes use of latent variables and is capable of learning interpretable latent representations for undirected graphs (see Figure 1).", "title": "" }, { "docid": "fb1a178c7c097fbbf0921dcef915dc55", "text": "AIMS\nThe management of open lower limb fractures in the United Kingdom has evolved over the last ten years with the introduction of major trauma networks (MTNs), the publication of standards of care and the wide acceptance of a combined orthopaedic and plastic surgical approach to management. The aims of this study were to report recent changes in outcome of open tibial fractures following the implementation of these changes.\n\n\nPATIENTS AND METHODS\nData on all patients with an open tibial fracture presenting to a major trauma centre between 2011 and 2012 were collected prospectively. 
The treatment and outcomes of the 65 Gustilo Anderson Grade III B tibial fractures were compared with historical data from the same unit.\n\n\nRESULTS\nThe volume of cases, the proportion of patients directly admitted and undergoing first debridement in a major trauma centre all increased. The rate of limb salvage was maintained at 94% and a successful limb reconstruction rate of 98.5% was achieved. The rate of deep bone infection improved to 1.6% (one patient) in the follow-up period.\n\n\nCONCLUSION\nThe reasons for these improvements are multifactorial, but the major trauma network facilitating early presentation to the major trauma centre, senior orthopaedic and plastic surgical involvement at every stage and proactive microbiological management, may be important factors.\n\n\nTAKE HOME MESSAGE\nThis study demonstrates that a systemised trauma network combined with evidence based practice can lead to improvements in patient care.", "title": "" }, { "docid": "e90c165a3e16035b56a4bb4ceb9282ed", "text": "Point of care testing (POCT) refers to laboratory testing that occurs near to the patient, often at the patient bedside. POCT can be advantageous in situations requiring rapid turnaround time of test results for clinical decision making. There are many challenges associated with POCT, mainly related to quality assurance. POCT is performed by clinical staff rather than laboratory trained individuals which can lead to errors resulting from a lack of understanding of the importance of quality control and quality assurance practices. POCT is usually more expensive than testing performed in the central laboratory and requires a significant amount of support from the laboratory to ensure the quality testing and meet accreditation requirements. Here, specific challenges related to POCT compliance with accreditation standards are discussed along with strategies that can be used to overcome these challenges. These areas include: documentation of POCT orders, charting of POCT results as well as training and certification of individuals performing POCT. Factors to consider when implementing connectivity between POCT instruments and the electronic medical record are also discussed in detail and include: uni-directional versus bidirectional communication, linking patient demographic information with POCT software, the importance of positive patient identification and considering where to chart POCT results in the electronic medical record.", "title": "" }, { "docid": "eec60b309731ef2f0adbfe94324a2ca0", "text": "Wireless sensor networks are those networks which are composed by the collection of very small devices mainly named as nodes. These nodes are integrated with small battery life which is very hard or impossible to replace or reinstate. For the sensing, gathering and processing capabilities, the usage of battery is must. Therefore, the battery life of Wireless Sensor Networks should be as large as possible in order to sense the information around it or in which the nodes are placed. The concept of hierarchical routing is mainly highlighted in this paper, in which the nodes work in a hierarchical manner by the formation of Cluster Head within a Cluster. These formed Cluster Heads then transfer the data or information in the form of packets from one cluster to another. In this work, the protocol used for the simulation is Low Energy adaptive Clustering Hierarchy which is one of the most efficient protocol. The nodes are of homogeneous in nature. 
The simulator used is MATLAB along with Cuckoo Search Algorithm. The Simulation results have been taken out showing the effectiveness of protocol with Cuckoo Search. Keywords— Wireless Sensor Network (WSN), Low Energy adaptive Clustering Hierarchy (LEACH), Cuckoo Search, Cluster Head (CH), Base Station (BS).", "title": "" }, { "docid": "ae408c0748466d1492636e8ebd68e7a2", "text": "Outside the highly publicized victories in the game of Go, there have been numerous successful applications of deep learning in the fields of information retrieval, computer vision, and speech recognition. In cybersecurity, an increasing number of companies have begun exploring the use of deep learning (DL) in a variety of security tasks with malware detection among the more popular. These companies claim that deep neural networks (DNNs) could help turn the tide in the war against malware infection. However, DNNs are vulnerable to adversarial samples, a shortcoming that plagues most, if not all, statistical and machine learning models. Recent research has demonstrated that those with malicious intent can easily circumvent deep learning-powered malware detection by exploiting this weakness.\n To address this problem, previous work developed defense mechanisms that are based on augmenting training data or enhancing model complexity. However, after analyzing DNN susceptibility to adversarial samples, we discover that the current defense mechanisms are limited and, more importantly, cannot provide theoretical guarantees of robustness against adversarial sampled-based attacks. As such, we propose a new adversary resistant technique that obstructs attackers from constructing impactful adversarial samples by randomly nullifying features within data vectors. Our proposed technique is evaluated on a real world dataset with 14,679 malware variants and 17,399 benign programs. We theoretically validate the robustness of our technique, and empirically show that our technique significantly boosts DNN robustness to adversarial samples while maintaining high accuracy in classification. To demonstrate the general applicability of our proposed method, we also conduct experiments using the MNIST and CIFAR-10 datasets, widely used in image recognition research.", "title": "" } ]
scidocsrr
072886dca67cf7b844206b28e21f408c
Diesel engine performance and exhaust emission analysis using waste cooking biodiesel fuel with an artificial neural network
[ { "docid": "e44e5c574fda3f03f8ec21f04eb1c417", "text": "Biodiesel (fatty acid methyl esters), which is derived from triglycerides by transesterification with methanol, has attracted considerable attention during the past decade as a renewable, biodegradable, and nontoxic fuel. Several processes for biodiesel fuel production have been developed, among which transesterification using alkali-catalysis gives high levels of conversion of triglycerides to their corresponding methyl esters in short reaction times. This process has therefore been widely utilized for biodiesel fuel production in a number of countries. Recently, enzymatic transesterification using lipase has become more attractive for biodiesel fuel production, since the glycerol produced as a by-product can easily be recovered and the purification of fatty methyl esters is simple to accomplish. The main hurdle to the commercialization of this system is the cost of lipase production. As a means of reducing the cost, the use of whole cell biocatalysts immobilized within biomass support particles is significantly advantageous since immobilization can be achieved spontaneously during batch cultivation, and in addition, no purification is necessary. The lipase production cost can be further lowered using genetic engineering technology, such as by developing lipases with high levels of expression and/or stability towards methanol. Hence, whole cell biocatalysts appear to have great potential for industrial application.", "title": "" } ]
[ { "docid": "fd2d04af3b259a433eb565a41b11ffbd", "text": "OVERVIEW • We develop novel orthogonality regularizations on training deep CNNs, by borrowing ideas and tools from sparse optimization. • These plug-and-play regularizations can be conveniently incorporated into training almost any CNN without extra hassle. • The proposed regularizations can consistently improve the performances of baseline deep networks on CIFAR-10/100, ImageNet and SVHN datasets, based on intensive empirical experiments, as well as accelerate/stabilize the training curves. • The proposed orthogonal regularizations outperform existing competitors.", "title": "" }, { "docid": "2937c5cd1848daa74bb35aaba80890b7", "text": "Neurofeedback (NF) is a training to enhance self-regulatory capacity over brain activity patterns and consequently over brain mental states. Recent findings suggest that NF is a promising alternative for the treatment of attention-deficit/hyperactivity disorder (ADHD). We comprehensively reviewed literature searching for studies on the effectiveness and specificity of NF for the treatment of ADHD. In addition, clinically informative evidence-based data are discussed. We found 3 systematic review on the use of NF for ADHD and 6 randomized controlled trials that have not been included in these reviews. Most nonrandomized controlled trials found positive results with medium-to-large effect sizes, but the evidence for effectiveness are less robust when only randomized controlled studies are considered. The direct comparison of NF and sham-NF in 3 published studies have found no group differences, nevertheless methodological caveats, such as the quality of the training protocol used, sample size, and sample selection may have contributed to the negative results. Further data on specificity comes from electrophysiological studies reporting that NF effectively changes brain activity patterns. No safety issues have emerged from clinical trials and NF seems to be well tolerated and accepted. Follow-up studies support long-term effects of NF. Currently there is no available data to guide clinicians on the predictors of response to NF and on optimal treatment protocol. In conclusion, NF is a valid option for the treatment for ADHD, but further evidence is required to guide its use.", "title": "" }, { "docid": "dbafe7db0387b56464ac630404875465", "text": "Recognition of body posture and motion is an important physiological function that can keep the body in balance. Man-made motion sensors have also been widely applied for a broad array of biomedical applications including diagnosis of balance disorders and evaluation of energy expenditure. This paper reviews the state-of-the-art sensing components utilized for body motion measurement. The anatomy and working principles of a natural body motion sensor, the human vestibular system, are first described. Various man-made inertial sensors are then elaborated based on their distinctive sensing mechanisms. In particular, both the conventional solid-state motion sensors and the emerging non solid-state motion sensors are depicted. 
With their lower cost and increased intelligence, man-made motion sensors are expected to play an increasingly important role in biomedical systems for basic research as well as clinical diagnostics.", "title": "" }, { "docid": "c6a25dc466e4a22351359f17bd29916c", "text": "We consider practical methods for adding constraints to the K-Means clustering algorithm in order to avoid local solutions with empty clusters or clusters having very few points. We often observe this phenomena when applying K-Means to datasets where the number of dimensions is n 10 and the number of desired clusters is k 20. We propose explicitly adding k constraints to the underlying clustering optimization problem requiring that each cluster have at least a minimum number of points in it. We then investigate the resulting cluster assignment step. Preliminary numerical tests on real datasets indicate the constrained approach is less prone to poor local solutions, producing a better summary of the underlying data. Contrained K-Means Clustering 1", "title": "" }, { "docid": "54899cac2cd13865e117d800bb21fb8b", "text": "The purpose of this study is to give a detailed performance comparison about the feature detector and descriptor methods, particularly when their various combinations are used for image matching. As the case study, the localization experiments of a mobile robot in an indoor environment are given. In these experiments, 3090 query images and 127 dataset images are used. This study includes five methods for feature detectors such as features from accelerated segment test (FAST), oriented FAST and rotated binary robust independent elementary features (BRIEF) (ORB), speeded-up robust features (SURF), scale invariant feature transform (SIFT), binary robust invariant scalable keypoints (BRISK), and five other methods for feature descriptors which are BRIEF, BRISK, SIFT, SURF, and ORB. These methods are used in 23 different combinations and it was possible to obtain meaningful and consistent comparison results using some performance criteria defined in this study. All of these methods are used independently and separately from each other as being feature detector or descriptor. The performance analysis shows the discriminative power of various combinations of detector and descriptor methods. The analysis is completed using five parameters such as (i) accuracy, (ii) time, (iii) angle difference between keypoints, (iv) number of correct matches, and (v) distance between correctly matched keypoints. In a range of 60°, covering five rotational pose points for our system, “FAST-SURF” combination gave the best results with the lowest distance and angle difference values and highest number of matched keypoints. The combination “SIFT-SURF” is obtained as the most accurate combination with 98.41% of correct classification rate. The fastest algorithm is achieved with “ORB-BRIEF” combination with a total running time 21303.30 seconds in order to match 560 images captured during the motion with 127 dataset images.", "title": "" }, { "docid": "fd8f4206ae749136806a35c0fe1597c7", "text": "In this paper, an inductor-inductor-capacitor (LLC) resonant dc-dc converter design procedure for an onboard lithium-ion battery charger of a plug-in hybrid electric vehicle (PHEV) is presented. Unlike traditional resistive load applications, the characteristic of a battery load is nonlinear and highly related to the charging profiles. 
Based on the features of an LLC converter and the characteristics of the charging profiles, the design considerations are studied thoroughly. The worst-case conditions for primary-side zero-voltage switching (ZVS) operation are analytically identified based on fundamental harmonic approximation when a constant maximum power (CMP) charging profile is implemented. Then, the worst-case operating point is used as the design targeted point to ensure soft-switching operation globally. To avoid the inaccuracy of fundamental harmonic approximation approach in the below-resonance region, the design constraints are derived based on a specific operation mode analysis. Finally, a step-by-step design methodology is proposed and validated through experiments on a prototype converting 400 V from the input to an output voltage range of 250-450 V at 3.3 kW with a peak efficiency of 98.2%.", "title": "" }, { "docid": "66b2ca04ed0b1435d525f04cd81969ac", "text": "Over the past couple of decades, trends in both microarchitecture and underlying semiconductor technology have significantly reduced microprocessor clock periods. These trends have significantly increased relative main-memory latencies as measured in processor clock cycles. To avoid large performance losses caused by long memory access delays, microprocessors rely heavily on a hierarchy of cache memories. But cache memories are not always effective, either because they are not large enough to hold a program's working set, or because memory access patterns don't exhibit behavior that matches a cache memory's demand-driven, line-structured organization. To partially overcome cache memories' limitations, we organize data cache prefetch information in a new way, a GHB (global history buffer) supports existing prefetch algorithms more effectively than conventional prefetch tables. It reduces stale table data, improving accuracy and reducing memory traffic. It contains a more complete picture of cache miss history and is smaller than conventional tables.", "title": "" }, { "docid": "8e53fff50063f2956e8f65e14bec77a4", "text": "Mobile Edge Computing (MEC) provides mobile and cloud computing capabilities within the access network, and aims to unite the telco and IT at the mobile network edge. This paper presents an investigation on the progress of MEC, and proposes a platform, named WiCloud, to provide edge networking, proximate computing and data acquisition for innovative services. Furthermore, the open challenges that must be addressed before the commercial deployment of MEC are discussed.", "title": "" }, { "docid": "79f5415cfc7f89685227abb130cd75e5", "text": "Software engineering is knowledge-intensive work, and how to manage software engineering knowledge has received much attention. This systematic review identifies empirical studies of knowledge management initiatives in software engineering, and discusses the concepts studied, the major findings, and the research methods used. Seven hundred and sixty-two articles were identified, of which 68 were studies in an industry context. Of these, 29 were empirical studies and 39 reports of lessons learned. More than half of the empirical studies were case studies. The majority of empirical studies relate to technocratic and behavioural aspects of knowledge management, while there are few studies relating to economic, spatial and cartographic approaches. A finding reported across multiple papers was the need to not focus exclusively on explicit knowledge, but also consider tacit knowledge. 
We also describe implications for research and for practice.", "title": "" }, { "docid": "88f60c6835fed23e12c56fba618ff931", "text": "Design of fault tolerant systems is a popular subject in flight control system design. In particular, adaptive control approach has been successful in recovering aircraft in a wide variety of different actuator/sensor failure scenarios. However, if the aircraft goes under a severe actuator failure, control system might not be able to adapt fast enough to changes in the dynamics, which would result in performance degradation or even loss of the aircraft. Inspired by the recent success of deep learning applications, this work builds a hybrid recurren-t/convolutional neural network model to estimate adaptation parameters for aircraft dynamics under actuator/engine faults. The model is trained offline from a database of different failure scenarios. In case of an actuator/engine failure, the model identifies adaptation parameters and feeds this information to the adaptive control system, which results in significantly faster convergence of the controller coefficients. Developed control system is implemented on a nonlinear 6-DOF F-16 aircraft, and the results show that the proposed architecture is especially beneficial in severe failure scenarios.", "title": "" }, { "docid": "8a2f40f2a0082fae378c7907a60159ac", "text": "We present a novel graph-based neural network model for relation extraction. Our model treats multiple pairs in a sentence simultaneously and considers interactions among them. All the entities in a sentence are placed as nodes in a fully-connected graph structure. The edges are represented with position-aware contexts around the entity pairs. In order to consider different relation paths between two entities, we construct up to l-length walks between each pair. The resulting walks are merged and iteratively used to update the edge representations into longer walks representations. We show that the model achieves performance comparable to the state-ofthe-art systems on the ACE 2005 dataset without using any external tools.", "title": "" }, { "docid": "86846cd0bc21747e651191a170ad6af7", "text": "Recent advances in deep learning have enabled researchers across many disciplines to uncover new insights about large datasets. Deep neural networks have shown applicability to image, time-series, textual, and other data, all of which are available in a plethora of research fields. However, their computational complexity and large memory overhead requires advanced software and hardware technologies to train neural networks in a reasonable amount of time. To make this possible, there has been an influx in development of deep learning software that aim to leverage advanced hardware resources. In order to better understand the performance implications of deep learning frameworks over these different resources, we analyze the performance of three different frameworks, Caffe, TensorFlow, and Apache SINGA, over several hardware environments. This includes scaling up and out with single-and multi-node setups using different CPU and GPU technologies. Notably, we investigate the performance characteristics of NVIDIA's state-of-the-art hardware technology, NVLink, and also Intel's Knights Landing, the most advanced Intel product for deep learning, with respect to training time and utilization. To our best knowledge, this is the first work concerning deep learning bench-marking with NVLink and Knights Landing. 
Through these experiments, we provide analysis of the frameworks' performance over different hardware environments in terms of speed and scaling. As a result of this work, better insight is given towards both using and developing deep learning tools that cater to current and upcoming hardware technologies.", "title": "" }, { "docid": "0dd558f3094d82f55806d1170218efce", "text": "As the key supporting system of telecommunication enterprises, OSS/BSS needs to support the service steadily in the long-term running and maintenance process. The system architecture must remain steady and consistent in order to accomplish its goal, which is quite difficult when both the technique and business requirements are changing so rapidly. The framework method raised in this article can guarantee the system architecture’s steadiness and business processing’s consistence by means of describing business requirements, application and information abstractly, becoming more specific and formalized in the planning, developing and maintaining process, and getting the results needed. This article introduces firstly the concepts of framework method, then recommends its applications and superiority in OSS/BSS systems, and lastly gives the prospect of its application.", "title": "" }, { "docid": "4e70489d8c2108a60431b42b155f516a", "text": "The notion of ‘wireheading’, or direct reward centre stimulation of the brain, is a wellknown concept in neuroscience. In this paper, we examine the corresponding issue of reward (utility) function integrity in artificially intelligent machines. We survey the relevant literature and propose a number of potential solutions to ensure the integrity of our artificial assistants. Overall, we conclude that wireheading in rational selfimproving optimisers above a certain capacity remains an unsolved problem despite opinion of many that such machines will choose not to wirehead. A relevant issue of literalness in goal setting also remains largely unsolved and we suggest that the development of a non-ambiguous knowledge transfer language might be a step in the right direction.", "title": "" }, { "docid": "a8661d8747a8201afff10112889db151", "text": "Empathy is a multidimensional construct consisting of cognitive (inferring mental states) and emotional (empathic concern) components. Despite a paucity of research, individuals on the autism spectrum are generally believed to lack empathy. In the current study we used a new, photo-based measure, the Multifaceted Empathy Test (MET), to assess empathy multidimensionally in a group of 17 individuals with Asperger syndrome (AS) and 18 well-matched controls. Results suggested that while individuals with AS are impaired in cognitive empathy, they do not differ from controls in emotional empathy. Level of general emotional arousability and socially desirable answer tendencies did not differ between groups. Internal consistency of the MET's scales ranged from .71 to .92, and convergent and divergent validity were highly satisfactory.", "title": "" }, { "docid": "e4c27a97a355543cf113a16bcd28ca50", "text": "A metamaterial-based broadband low-profile grid-slotted patch antenna is presented. By slotting the radiating patch, a periodic array of series capacitor loaded metamaterial patch cells is formed, and excited through the coupling aperture in a ground plane right underneath and parallel to the slot at the center of the patch. By exciting two adjacent resonant modes simultaneously, broadband impedance matching and consistent radiation are achieved. 
The dispersion relation of the capacitor-loaded patch cell is applied in the mode analysis. The proposed grid-slotted patch antenna with a low profile of 0.06 λ0 (λ0 is the center operating wavelength in free space) achieves a measured bandwidth of 28% for the |S11| less than -10 dB and maximum gain of 9.8 dBi.", "title": "" }, { "docid": "4d276851b607fff6267ec03d6f28a471", "text": "The polysaccharide-rich wall, which envelopes the fungal cell, is pivotal to the maintenance of cellular integrity and for the protection of the cell from external aggressors - such as environmental fluxes and during host infection. This review considers the commonalities in the composition of the wall across the fungal kingdom, addresses how little is known about the assembly of the polysaccharide matrix, and considers changes in the wall of plant-pathogenic fungi during on and in planta growth, following the elucidation of infection structures requiring cell wall alterations. It highlights what is known about the phytopathogenic fungal wall and what needs to be discovered.", "title": "" }, { "docid": "839f8f079c4134641f6bf4051200dd8d", "text": "Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted definition of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a literature review, the paper provides a definition of Industrie 4.0 and identifies six design principles for its implementation: interoperability, virtualization, decentralization, real-time capability, service orientation, and modularity. Taking into account these principles, academics may be enabled to further investigate on the topic, while practitioners may find assistance in implementing appropriate scenarios.", "title": "" }, { "docid": "50f7fd72dcd833c92efb56fb71918263", "text": "The input vocabulary for touch-screen interaction on handhelds is dramatically limited, especially when the thumb must be used. To enrich that vocabulary we propose to discriminate, among thumb gestures, those we call MicroRolls, characterized by zero tangential velocity of the skin relative to the screen surface. Combining four categories of thumb gestures, Drags, Swipes, Rubbings and MicroRolls, with other classification dimensions, we show that at least 16 elemental gestures can be automatically recognized. We also report the results of two experiments showing that the roll vs. slide distinction facilitates thumb input in a realistic copy and paste task, relative to existing interaction techniques.", "title": "" }, { "docid": "97af9704b898bebe4dae43c1984bc478", "text": "In earlier work we have shown that adults, young children, and infants are capable of computing transitional probabilities among adjacent syllables in rapidly presented streams of speech, and of using these statistics to group adjacent syllables into word-like units. In the present experiments we ask whether adult learners are also capable of such computations when the only available patterns occur in non-adjacent elements. In the first experiment, we present streams of speech in which precisely the same kinds of syllable regularities occur as in our previous studies, except that the patterned relations among syllables occur between non-adjacent syllables (with an intervening syllable that is unrelated). 
Under these circumstances we do not obtain our previous results: learners are quite poor at acquiring regular relations among non-adjacent syllables, even when the patterns are objectively quite simple. In subsequent experiments we show that learners are, in contrast, quite capable of acquiring patterned relations among non-adjacent segments-both non-adjacent consonants (with an intervening vocalic segment that is unrelated) and non-adjacent vowels (with an intervening consonantal segment that is unrelated). Finally, we discuss why human learners display these strong differences in learning differing types of non-adjacent regularities, and we conclude by suggesting that these contrasts in learnability may account for why human languages display non-adjacent regularities of one type much more widely than non-adjacent regularities of the other type.", "title": "" } ]
scidocsrr